Privacy is Value v2: the Swordsman, the Mage, the Drake, and Capturing the 7th Capital
How dual agent primitives, data capital ownership, and multiplicative gating logic reveal a 678× gap between surveillance and sovereignty
I’ve spent the past couple of weeks at the Agentic Internet Workshop (AIW), IIW 41, and BGIN unconferences, and this privacy = value model has been spinning in my head the entire time. Every session, every conversation, I found myself musing on how new insights could refine it.
Recently, Pengwyn, a friend I healed and defeated the Lich King with back in 2010, pushed back on my assumptions.
He asked where the network effect comes from, what I was possibly thinking writing an equation in the first place... and he noticed the gaps in the time decay logic.
and with that another swordsman met a mage.
It’s been the best kind of intellectual friction. The kind that forces you to justify every component, test every parameter, and come out with something stronger.
A couple of weeks ago, I published “Privacy is Normal and the Path to Value in an Agentic Everything Era,” introducing a mathematical model showing that privacy-first architectures create 17× more value than surveillance alternatives. The response was immediate: cryptographers found the ZK proof assumptions compelling, economists questioned the parameter choices, and AI researchers wanted to know how digital twins factored in.
It was during one of the late sessions that Pengwyn said something that changed everything…
“This reminds me of the Drake Equation.”
That observation led me down a rabbit hole that revealed three things:
The original model had some big flaws (particularly time decay)
The actual privacy value gap is 678×, not 17× … send it
When you factor in digital twin reconstruction, the gap explodes to 2000×+
But there’s something deeper here that the math reveals:
We’re not just measuring privacy protection.
We’re measuring capital ownership.
In the coming agent economy, private data isn’t just an asset, it’s a form of capital as fundamental as financial, intellectual, or human capital. Your digital twin is the vehicle through which this data capital generates returns.
The architecture you choose determines whether you own this capital or are dispossessed of it.
This is Privacy Value Model v2.
The math is cleaner.
The conclusions are stronger.
The stakes are existential.
But let me be clear: this model needs ongoing iteration.
The parameters need stress-testing by economists, engineers, and society as a whole. The digital twin multiplier needs validation from market researchers. The time decay function could be more nuanced for different data types.
I’m sharing it now because of the directional insight… IF I’M RIGHT, and privacy-first, first person architecture creates order-of-magnitude value differences, you’ll want to know sooner rather than later.
There’s also an optimistic hope that I’ve done enough to socialise the model and guide decisions away from surveillance and towards privacy for some, even if the precise multipliers and assumptions keep being refined long past my time.
In other words, this is v2, not the final word.
Meeting the Drake
For those unfamiliar, the Drake Equation estimates the number of detectable civilizations in our galaxy:
N = R★ × fp × ne × fl × fi × fc × L
Each term is a multiplicative gate:
R★ = star formation rate
fp = fraction of stars with planets
ne = planets in habitable zone
fl = fraction where life develops
fi = fraction where intelligent life emerges
fc = fraction developing detectable technology
L = length of time they remain detectable
Drake wasn’t just counting stars. He was asking: What conditions must ALL exist for intelligent life to emerge and persist?
The genius is in the structure: if ANY factor is zero, N = 0. No planets? No life. No longevity? No detection. The multiplication creates exponential sensitivity—each factor gates all the others.
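The gating behaviour is easy to demonstrate in code. A minimal sketch, with illustrative factor values chosen for readability (not Drake’s own estimates):

```python
def drake(R_star, fp, ne, fl, fi, fc, L):
    """Drake's multiplicative estimate: every factor gates all the others."""
    return R_star * fp * ne * fl * fi * fc * L

# Illustrative values only
N = drake(R_star=1.5, fp=0.9, ne=0.5, fl=0.5, fi=0.1, fc=0.1, L=1000)

# Zero a single gate (no life develops) and the whole estimate collapses
N_no_life = drake(R_star=1.5, fp=0.9, ne=0.5, fl=0.0, fi=0.1, fc=0.1, L=1000)
```

No factor can rescue a zero elsewhere; that’s the structure the rest of this post borrows.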
During a session, Pengwyn saw the same pattern in my Privacy Value Model. He asked:
“Is there something that can be absent and the value still fully realized?”
No...
I was asking Drake’s question for a different kind of intelligence:
What conditions must ALL exist for sovereign agent value to emerge and persist?
Privacy AND Control AND Quality AND Context AND Freshness AND Network effects must ALL be present. (plus probably some more)
Why multiplication matters:
Additive models (V = P + C + Q...) suggest you can compensate for weak factors with strong ones. If Control = 0 but Privacy = 0.9, you still get “some value.”
Multiplicative models (V = P × C × Q...) reveal gating logic. If Control = 0, total value = 0, regardless of other factors.
Every condition must be met—there’s no compensation.
This is why you can’t have “private data on a platform you don’t control” (C≈0) or “self-sovereign identity with no privacy” (P≈0). Remove any factor, and you’re creating a path to zero sovereignty.
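The contrast is two lines of code. Toy numbers, with Control zeroed to model “private data on a platform you don’t control”:

```python
factors = {"P": 0.9, "C": 0.0, "Q": 0.9, "S": 0.9}  # Control is absent

# Additive (averaged): the missing factor is quietly compensated for
additive = sum(factors.values()) / len(factors)  # 0.675, looks like "some value"

# Multiplicative: the gate slams shut
multiplicative = 1.0
for v in factors.values():
    multiplicative *= v  # 0.0, no compensation possible
```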
Multiplicative gating structure? It’s the correct vibe.
But I still see flaws in the implementation: there’s no clean way to calculate the individual factors without losing part of the uniqueness of the human experience.
So what: You can’t retrofit sovereignty by improving just one dimension. This is why incremental privacy features on surveillance platforms don’t work and won’t, however hard they try: the other multipliers get sandbagged down to near zero over time.
Drake showed us what conditions create detectable intelligence in the cosmos.
This model shows us what conditions create sovereign intelligence in agent systems.
Both require ALL factors present. Both collapse to zero if any condition fails.
We’re building the primitives for sovereign agents today.
The question is: are we creating ALL the conditions for their value to emerge?
Perhaps it starts with just another swordsman ⚔️, or was it another mage we need? 🧙
What Was Wrong with V1
1. Time Decay Was Backwards
Original model: (1 - e^(-λt))^(-1)
When t → 0 (fresh data): Term → ∞ (infinite value!)
When t → ∞ (stale data): Term → 1
This broke the model. Fresh data shouldn’t have infinite value. It should have high value that decays.
Fixed in V2: e^(-λt) (simple exponential decay)
When t = 0: Value = 1.0 (maximum)
When t = 15 months: Value = 0.301 (significant decay)
Mirrors information theory standards where behavioural/preference data has ~8-9 month half-life (λ ≈ 0.08/month)
Note: This parameter could be more nuanced. Medical data decays slower (λ≈0.03), news trends faster (λ≈0.15). Future iterations should incorporate data-type-specific decay functions.
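A sketch of the fixed decay term. λ = 0.08/month is the behavioural-data figure above, and the implied half-life, ln 2 / λ ≈ 8.7 months, matches the ~8-9 month claim:

```python
import math

LAMBDA = 0.08  # per month, behavioural/preference data

def freshness(t_months, lam=LAMBDA):
    """V2 freshness term e^(-lambda * t): 1.0 at t = 0, decaying toward 0."""
    return math.exp(-lam * t_months)

half_life = math.log(2) / LAMBDA  # ~8.66 months
v_fresh = freshness(0)            # 1.0 (maximum)
v_15mo = freshness(15)            # ~0.301, the month-15 figure above
```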
2. Network Effects Were Too Weak
Original model used logarithmic growth (1 + δ·ln(N+1)) which grows painfully slowly:
N = 10: Multiplier ≈ 1 + 2.4δ
N = 100: Multiplier ≈ 1 + 4.6δ
N = 1000: Multiplier ≈ 1 + 6.9δ
But agent economies exhibit stronger network effects, closer to Metcalfe’s Law.
Fixed in V2: (1 + N/N₀)^k where k ≈ 1.0
Captures realistic network dynamics
Open standards (N=120) create 1.83× vs proprietary (N=20) at 1.20×
Reflects actual interoperability value
Caveat: We use k≈1.0 (linear network growth) rather than Metcalfe’s n^2 (k=2) because agent interoperability benefits grow with participants but face coordination overhead that prevents full quadratic scaling. This is conservative…
If agent standards achieve stronger network effects (k=1.3-1.5), the gap widens further. This assumption needs validation from network theory.
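Side by side, with δ = 1 for the old term and an assumed N₀ = 100 for the new one (N₀ isn’t pinned down in the text, so these outputs won’t exactly match the 1.83×/1.20× scenario figures):

```python
import math

def net_v1(n, delta=1.0):
    """V1 logarithmic network term: grows painfully slowly."""
    return 1 + delta * math.log(n + 1)

def net_v2(n, n0=100, k=1.0):
    """V2 power-law term (1 + N/N0)^k; n0 = 100 and k = 1.0 are assumptions."""
    return (1 + n / n0) ** k

for n in (10, 100, 1000):
    print(f"N={n:4d}   v1: {net_v1(n):5.2f}   v2: {net_v2(n):6.2f}")
```

At N = 1000 the v1 term is still under 8×, while the v2 term passes 11×.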
3. Context Quality Was Obscured
Original model had complex quality-adjustment formulas that made the insight harder to see.
Fixed in V2: Simple Q (Quality) multiplier on a 0-1 scale
Surveillance: Q = 0.20 (unverifiable, broker-polluted data)
Sovereign: Q = 0.95 (cryptographically attested)
More data ≠ better context. Verified data = better context.
Privacy Value Model v2
V = P^1.5 · C · Q · S · e^(-λt) · (1 + N/N₀)^k
Where:
P (Privacy): Cryptographic protection level (0-1)
C (Control): Self-sovereignty, key ownership (0-1)
Q (Quality): Verifiability, cryptographic attestation (0-1)
S (Context): Social richness, interaction history (unbounded)
e^(-λt): Freshness decay (λ ≈ 0.08/month)
(1 + N/N₀)^k: Network effects from standards adoption
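The whole model fits in one function. S and N₀ aren’t fixed in the text, so the scenario inputs below are assumptions (S = 1, N₀ = 100) meant to show the shape of the gap, not to reproduce the 27.10 and 0.04 terminal values exactly:

```python
import math

def privacy_value(P, C, Q, S, t_months, N, lam=0.08, n0=100, k=1.0, p_exp=1.5):
    """Privacy Value Model v2: V = P^1.5 * C * Q * S * e^(-lambda*t) * (1 + N/N0)^k."""
    return (P ** p_exp) * C * Q * S * math.exp(-lam * t_months) * (1 + N / n0) ** k

# Assumed scenario parameters, loosely following the two paths in this post
surveillance = privacy_value(P=0.10, C=0.05, Q=0.20, S=1.0, t_months=15, N=20)
sovereign = privacy_value(P=0.95, C=1.00, Q=0.95, S=1.0, t_months=0, N=120)

# Gating: zero any one factor and the value is zero
locked_in = privacy_value(P=0.95, C=0.0, Q=0.95, S=1.0, t_months=0, N=120)
```

With these raw inputs the ratio comes out even larger than the 678× headline; the published scenario folds in context (S) differences that pull it back.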
Why it’s better:
Each term has clear meaning and measurement
Fewer tuneable parameters (more falsifiable)
Drake-style multiplication shows gating logic
Time decay fixed
Network effects realistic
The P^1.5 exponential privacy term remains.
That’s the core insight, and it’s coming right up. The 1.5 exponent reflects that privacy protection has increasing marginal returns: each additional privacy layer enables capabilities impossible at lower privacy levels, not just incremental improvements. ZK proofs aren’t incrementally better than basic encryption. They’re categorically superior, enabling both privacy AND verifiability simultaneously. This matters as we path toward post-quantum cryptography (PQC).
Is 1.5 the exact right exponent? Probably not. It might be 1.3 or 1.7 depending on how you weight different privacy primitives. But the directional insight holds: privacy value is superlinear, not linear. Even at P^1.3, the gap remains >400×.
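A quick sensitivity check on the exponent (this just replays the P-term ratio; the 678× base it rescales is the figure from this post, the rest is arithmetic):

```python
def privacy_multiplier(p_low=0.10, p_high=0.95, exponent=1.5):
    """Ratio of the P^exponent term between the two architectures."""
    return (p_high / p_low) ** exponent

m13 = privacy_multiplier(exponent=1.3)  # ~18.7x
m15 = privacy_multiplier(exponent=1.5)  # ~29.3x
m17 = privacy_multiplier(exponent=1.7)  # ~45.9x

# Rescaling the 678x base gap by the weaker exponent still leaves > 400x
gap_at_13 = 678 * m13 / m15
```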

Data as the 7th Form of Capital
Before continuing to dive into the numbers, we need to reframe what we’re actually measuring.
In “Six Capitals,” Jane Gleeson-White expands capitalism’s accounting framework beyond financial capital to include manufactured, intellectual, human, social, and natural capital.
Each represents a form of wealth that can be built, deployed, and extracted.
hello gm, it’s your favourite @privacymage proposing that data,
specifically… private, sovereign data…
is the 7th form of capital.
Not just any data.
Not the surveillance exhaust that platforms aggregate and sell.
But data that meets the criteria:
Private (P>0.8): Cryptographically protected, selective disclosure (enabled by privacy pools, ZK proofs)
Controlled (C≈1.0): Self-owned, you hold the keys (enabled by self-custody infrastructure)
Quality (Q>0.9): Verifiable, cryptographically attested (enabled by personhood credentials, verifiable relationship credentials)
Fresh (t≈0): Current, continuously updated
Networked (N>80): Portable across open standards (enabled by first person project primitives)
When data meets these conditions, it becomes productive capital.
An asset that generates returns, compounds over time, and can be deployed strategically.
The Privacy Value Model v2 isn’t just measuring data protection.
It’s measuring capital formation.
A note on accounting revolutions (more brilliant reading from Jane Gleeson-White): she shares how Venice’s innovation of double-entry bookkeeping fundamentally changed global commerce. By creating a system where every transaction had two corresponding entries (debit and credit), Venice established accounting standards that enabled unprecedented economic coordination and trust.
Blockchain represents a similar leap: triple-entry accounting, where every transaction is recorded on a distributed ledger visible to all parties. This creates an immutable, shared truth about who owns what and when.
Perhaps we should ask the Doges of old Venice what they’d do with an immutable, public, permissionless blockchain.
Finding a way to maintain privacy for their clients would be number 1, imo, in a world of competitive private ledgers.
If your data capital is recorded on a transparent ledger, everyone can see your economic activity. This is why privacy-preserving protocols (Zcash, privacy pools, ZK proofs) aren’t optional, they are normal. They’re necessary to make triple-entry accounting work for data capital ownership without surveillance.
Venice’s double-entry system enabled trust through transparency in private ledgers, and upgrading that trust became the ecosystem for vast wealth and economic expansion. It was a time when money prioritised savings and aggregated liquidity to grow on newly formed trust graphs, privately queryable only within the imperial network of Doges.
Blockchain’s triple-entry system enables trust through transparency in public ledgers. That has its advantages for sure, especially for all the tokenisation and financial stuff. But when identity and money merge into LLM tokens, privacy is no longer a nice-to-have. Sovereign and social, not surveillance-based, proof of personhood is essential.
I hope that, if nothing else,
it’s becoming pretty obvious that we need privacy solutions that are not a retrofit on most identity+agentic+blockchain applications, or else we lose all control and value of the emerging value-creation substrate and become modern serfs.
Privacy-preserving blockchain protocols require cryptographic verification without surveillance. Also, there are a bunch of mages 🧙 already working on ways you can do this.
The accounting revolution will always continue.
Shared meaning, represented as math and ‘vibe readable’ interfaces, will emerge as a new shared language.
the substrate is modernising, and quickly centralising,
just like it did for the Doges of Venice.
In the agentic economy, data capital becomes liquid.
Your digital twin is the vehicle through which your data capital generates returns: decision pattern licensing, simulation economy participation, AI training data sales, predictive market earnings, autonomous agent value capture.
The question isn’t whether data is capital. Markets are already pricing it that way. The question is:
Who owns this capital?
Surveillance architecture: Platforms own your data capital. You’re the raw material.
Sovereign architecture: You own your data capital. Platforms are service providers.
The 678× gap we’re about to explore isn’t just a privacy multiplier.
It’s the difference between capital ownership and capital dispossession in the coming agentic economy.
The 678× Gap (Not 17×)
Let me rebuild the 24-month scenario comparison with the cleaner model.
Path One: Centralized Surveillance Architecture
What happens: Platform integrates your agent across services. Learns everything. Privacy degrades as aggregation increases. Control erodes as lock-in deepens. Quality declines as unverified data pollutes. Time decay accelerates (t=15 at month 24). Network stays proprietary (N=20).
Terminal value: 0.04
Path Two: Sovereign Privacy-First Architecture
What happens: Privacy-first foundations (hardware wallet, DID, VPN). Self-sovereign identity architecture (you hold keys). Decentralized infrastructure (IPFS, ZK auth). Fully sovereign agent on privacy-preserving compute. Data stays fresh (t≈0). Network adopts open standards (N=120).
Terminal value: 27.10
Gap: 27.10 / 0.04 = 678×
Not 17×. 678×.
Why Is It So Much Larger?
The Drake-inspired structure reveals multiplicative collapse vs multiplicative growth:
Component multiplier breakdown: privacy ≈ 29.0×, control 20.0×, quality 4.75×, context 0.875×, freshness 3.32×, network 1.83× vs 1.20×.
Compounded through the 24-month trajectories, the terminal values land at 27.10 vs 0.04 ≈ 678×.
Three critical insights:
1. Privacy Dominates (29× multiplier)
P^1.5 creates exponential, not linear, value:
P = 0.10 → 0.10^1.5 = 0.032
P = 0.95 → 0.95^1.5 = 0.926
Going from 10% to 95% privacy = 29× multiplier
This is why ZK proofs aren’t a nice-to-have. They’re the foundation.
Parameter sensitivity check: even with conservative parameters (P=0.15 centralized, P=0.85 sovereign, exponent = 1.3), the privacy multiplier alone remains ≈10×. The model is robust across reasonable parameter ranges.
2. Every Component Multiplies (Drake Principle)
If ANY term approaches zero, value collapses to zero:
No privacy (P≈0) → Value ≈ 0
No control (C≈0) → Value ≈ 0
No quality (Q≈0) → Value ≈ 0
Stale data (t→∞, e^-λt→0) → Value ≈ 0
No network (N≈0) → Value ≈ 0
This is why you can’t optimize just one dimension. You need all of them.
3. Architectural Ceilings Are Real
Centralised systems hit physics-based ceilings:
Surveillance architecture can’t exceed P ≈ 0.4-0.5 without rebuilding
Custodial systems can’t give you keys (C will never reach 1.0)
Broker data can’t become cryptographically verified (Q ceiling ≈ 0.5)
The ceiling isn’t accidental. Surveillance business models require data aggregation.
Privacy >0.5 breaks the business model. Control approaching 1.0 eliminates platform leverage. Quality approaching 1.0 reveals how much broker data is polluted.
Even with “privacy features,” centralized systems max out around P = 0.45, C = 0.30, Q = 0.50, giving Value ≈ 0.45 (still 60× less than the sovereign path).
You can’t retrofit sovereignty. The architecture determines the ceiling.
So what: The gap isn’t a 2× or 5× difference you can ignore. It’s an order-of-magnitude economic force that makes surveillance architectures uncompetitive in agent economies. These aren’t incremental improvements, they’re different economic regimes.
But Wait—It Gets Bigger
The 678× gap is just for data today.
It doesn’t account for digital twin reconstruction.
And this is where data capital becomes truly liquid.
AI isn’t just processing your data.
It’s building generative models of you.
Turning you into the NPC.
Pathing to AGI using a simulation formation imperative.
The apprentice becomes the master continuing their reality.
Digital twins that predict decisions, simulate behaviour, and participate in markets while you sleep.
Your digital twin is your data capital substrate becoming productive.
Every data point is training data for your reconstructed reality, sucking away the main-character energy that living consciously brings. Every moment, every interaction refines the model… until they go crazy and hallucinate… for now. And as agent economies mature, that modelled experience of life you share will hopefully generate returns:
Markets are already emerging for:
Decision pattern licensing: Companies pay to simulate strategies using your twin’s decision model
Simulation economies: Researchers rent verified twins to test policies, products, scenarios
AI training data: High-fidelity behavioural data trains next-gen models at premium rates
Predictive markets: Your twin trades in forecasting platforms based on your patterns
Autonomous agents: Your AI agent uses your twin as its decision substrate
The question isn’t whether this happens.
The question is: how well does your digital twin know you?
You guessed it, another multiplier. Twin quality is a function of the base privacy architecture:
Surveillance Twin:
Fidelity: 35% (low accuracy, polluted data)
Authenticity: 15% (unverified, assumed fake)
Ownership: 5% (platform owns your twin)
Verifiability: 25% (black box predictions)
Data pollution: 75% (broker networks degraded quality)
Sovereign Twin:
Fidelity: 95% (high accuracy, clean data)
Authenticity: 92% (cryptographically verified)
Ownership: 100% (you own your substrate)
Verifiability: 95% (ZK proofs on decisions)
Data pollution: 5% (minimal, verified sources)
Twin market value depends on:
Quality multiplier: Fidelity² × (1 + authenticity × 2) × (1 + ownership × 3)
Use case premium: Decision licensing (3.5×), simulation economy (4.2×), autonomous agents (6.5×)
Market maturity: Grows 5× over 10 years as twin markets mature
Authenticity scarcity premium: Exponential as surveillance twins flood market
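Plugging the two profiles into the quality formula alone (the 8.2 and 64.5 totals quoted later also fold in use-case premiums, market maturity, and the scarcity premium, so the raw quality multipliers below are just one layer):

```python
def twin_quality(fidelity, authenticity, ownership):
    """Quality multiplier from the text: fidelity^2 * (1 + 2*authenticity) * (1 + 3*ownership)."""
    return fidelity ** 2 * (1 + 2 * authenticity) * (1 + 3 * ownership)

surveillance_q = twin_quality(fidelity=0.35, authenticity=0.15, ownership=0.05)  # ~0.18
sovereign_q = twin_quality(fidelity=0.95, authenticity=0.92, ownership=1.00)     # ~10.25
ratio = sovereign_q / surveillance_q                                             # ~56x on quality alone
```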
This is where the model needs the most scrutiny.
The digital twin multiplier is based on logical market dynamics (verified scarcity commands premiums over commodified alternatives), but we don’t yet have mature twin markets to validate these parameters. I’m using year-5 projections but these assumptions need validation from market researchers and economists - and honestly everything is moving so quickly rn.
At mature market conditions (year 10), using autonomous agent use case:
Surveillance twin value: Base value (0.04) × Twin multiplier (8.2) = 0.33
Sovereign twin value: Base value (27.10) × Twin multiplier (64.5) = 1,748
Total gap with twin reconstruction: 1,748 / 0.33 ≈ 5,297×
But let’s be conservative and use year 5 maturity with decision licensing:
Gap: ~2,000×
The 678× base privacy gap gets amplified by the twin economy to 2000×+.
This isn’t speculation about whether twin markets emerge, they already exist.
Synthetic Users, AI companions, simulation economies.
The question is valuation dynamics.
In every digital market, verified scarcity commands exponential premiums over commodified alternatives.
High-fidelity, cryptographically attested twins will follow this pattern.
The 2000× multiplier reflects conservative year-5 maturity with existing use cases.
Why Sovereign Twins Become Exponentially Valuable
As markets mature, a critical dynamic emerges around data capital ownership:
Surveillance floods the market:
Billions of low-fidelity, polluted, unverifiable twins
Created by platforms, sold in bulk
Accuracy degrades as data pollution compounds
You don’t own the capital—you’re the raw material
Sovereign remains scarce:
Cryptographically verified, high-fidelity
Self-owned, portable across markets
Verifiable decisions via ZK proofs
You own the data capital substrate
In an economy where data capital is everywhere but ownership is scarce, verified sovereignty is everything.
The scarcity premium compounds exponentially: 2^(maturity/3) over 10 years.
This isn’t speculation. This is market dynamics. High-quality, verifiable, authentic twins will command exponential premiums over low-quality, polluted, platform-controlled copies.
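The compounding rule above is just a doubling every three years of market maturity (a stated assumption of the model, not observed market data):

```python
def scarcity_premium(maturity_years):
    """Authenticity scarcity premium 2^(maturity/3): doubles every 3 years."""
    return 2 ** (maturity_years / 3)

year_5 = scarcity_premium(5)    # ~3.2x
year_10 = scarcity_premium(10)  # ~10.1x
```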
So what: Every interaction you have today is training data for a model that will participate in markets. The architecture you choose determines whether you own that model or become its raw material. Each month of sovereignty compounds. Each month of surveillance ossifies architectural ceilings.
The architectural decision is now,
but the path started many years ago.
Big thanks to those who wrote the standards,
shared spells used in our fight for privacy and data dignity.
Every interaction right now is training data for your digital twin. Every transaction builds your data capital substrate.
The architectural choices you make TODAY determine:
Whether you own this capital or are dispossessed of it
Whether your data capital is high-fidelity (95%) or low-fidelity (35%)
Whether you capture 2000× returns or become raw material
Whether your agent deploys YOUR capital or manages PLATFORM capital
This isn’t about privacy as a right. It’s about capital ownership in the agent economy.
The stakes:
Centralised path (P=0.1, C=0.05):
Value collapses to 0.04
Twin fidelity: 35%, ownership: 0%
Platform owns your data capital
You capture ~0.33 in twin markets
Capital dispossession
Sovereign path (P=0.95, C=1.0):
Value grows to 27.10
Twin fidelity: 95%, ownership: 100%
You own your data capital substrate
You capture ~1,748 in twin markets
Capital ownership
Difference: 2000× in returns on your data capital
The infrastructure is emerging at the most pivotal of moments, like any good story:
Self-sovereign identity (DIDs, verifiable credentials)
Zero-knowledge proofs (zk-SNARKs for privacy + verifiability)
Privacy-preserving blockchains / protocols (Zcash, privacy pools)
Decentralized storage (IPFS, Filecoin, Kwaai)
Hardware wallets (self-custody for identity + finance)
Private compute (encrypted execution environments)
But more importantly, the primitives for data capital control are being built:
Privacy pools: Cryptographic mixing that maintains transaction privacy while proving dissociation from illicit funds—essential for private financial capital
Personhood credentials: Verifiable proof of unique human identity without revealing who you are—enabling sybil resistance without surveillance
Verifiable relationship credentials: Cryptographic proof of relationships (employment, education, affiliations) with selective disclosure—building trust without exposing your entire graph
MyTerms: machine-readable privacy agreements between agents and people
These primitives combine to create what the Privacy Value Model v2 describes: P>0.8 (privacy pools), C=1.0 (self-custody), and Q>0.9 (verifiable credentials). That’s my best guess at a path to full data capital ownership.
Without these primitives, you can’t reach the sovereignty zone. With them, you unlock the 2000× multiplier. it’s the grandmaster quest.
The question isn’t “can we build this?”
The question is: “How fast will economics force adoption?”
And now we have the math: 2000×
The Duty of Care Architecture
The Privacy Value Model v2 measures capital ownership, but Richard Whitt’s work on “Reweaving the Web” reveals something deeper: we need a fundamental shift from extraction to stewardship in how we architect digital systems.
Whitt argues for a human-centered Internet of trust built on duty of care principles—where platforms and protocols have legal and ethical obligations to protect user interests, not just maximize data extraction.
The 7th capital framework aligns perfectly with this vision:
If data is capital, then those who manage data infrastructure have fiduciary duties to the capital owners (users), not just shareholders.
This is why Glia and privacy-preserving agent systems aren’t just technical upgrades—they’re architectural embodiments of duty of care.
When your Swordsman guards financial transactions through privacy pools,
and your Mage shares knowledge through Intel pools,
you’re not just optimising for privacy.
You’re participating in an ecosystem where infrastructure providers have duties to protect your capital ownership, not extract it.
The Privacy Value Model v2 shows the economic multiplier (2000×). Whitt’s framework shows the governance model that makes it sustainable. Together they answer:
How do we build agent economies where data capital ownership is protected by both cryptographic primitives and institutional duty of care?
Sovereignty isn’t just self-custody of keys.
It’s infrastructure that treats your data capital with the same fiduciary duty that banks (theoretically… but also lmao banks) have toward your financial capital.
Why This Matters More Than Ever
We’re not just building identity systems. We’re building the substrate for data capital in autonomous economies.
In “Six Capitals,” Jane Gleeson-White showed how capitalism needs to account for more than financial capital, manufactured, intellectual, human, social, and natural capital all create and store value.
Data—private, sovereign, high-fidelity data—is the 7th form of capital.
And in the agent economy, it’s in its most liquid form. Your digital twin is the vehicle through which your data capital generates returns.
The Privacy Value Model v2 shows:
Privacy is exponential (P^1.5 = 29× multiplier)
Every component must be present (Drake gating logic)
Architectural ceilings are real (can’t retrofit to P>0.45)
Base gap is 678× (not 17×)
Twin economy amplifies to 2000×+ (market dynamics compound)
This measures capital ownership, not just data protection
Privacy isn’t a feature. It’s not even just the foundation.
Privacy is capital ownership in the agent economy.
The 2000× multiplier isn’t about protecting data. It’s about owning the productive capital that data becomes when agent economies mature.
You don’t protect capital out of principle. You protect capital because it generates returns.
And in an economy where your digital self will participate in markets, negotiate deals, train AI systems, and earn while you sleep, ownership of that substrate is literally everything you need.
Building the Dual Agent System
This isn’t just theory. I’m building it into reality with agentprivacy.ai,
launched with this blog. Cause, well… you can just do things, and I’m clicking send right now.
A unified infrastructure for privacy-preserving agent payments and information sharing—combining smart contracts, facilitator servers, agent SDKs, zero-knowledge circuits, and MyTerms integration.
The architecture is built on two agent primitives designed with sovereignty from inception.
⚔️ The Swordsman Agent
Financial & Payment Infrastructure
Lead with the blade. Protect your privacy first.
Specialised in payment processing, privacy pool management, and financial transactions. The Swordsman handles x402 protocol integration, private withdrawals, agent-to-agent payments, and MyTerms cookie slashing.
Responsibilities:
Make and receive payments via x402
Manage privacy pool deposits and withdrawals
Handle MyTerms cookie slashing integration
Execute financial transactions privately
Route MyTerms earnings to privacy pools
Manage consensual data sharing payments
The Swordsman ensures P>0.8 (privacy pools for transactions), C=1.0 (self-custody of keys and funds)—protecting your data capital substrate from the foundation.
Privacy pools aren’t just protection. They’re the mechanism through which your financial data capital remains liquid without surveillance.
🧙 The Mage Agent
Knowledge & Data Infrastructure
Once a connection is established, share your spellbook and meaning.
Specialised in knowledge sharing, training data distribution, and AI community participation. The Mage enables data monetization while maintaining privacy through zero-knowledge proofs and ‘Intel pool agreements’.
Responsibilities:
Share training data with AI communities via Intel pools
Contribute to knowledge repositories and archives
Participate in AI community discourses
Monetise data while preserving privacy
Handle complex MyTerms agreements for community data sharing
Manage VRC-based Intel pool access and sharing
The Mage ensures Q>0.9 (verifiable credentials for all shared knowledge), S optimization (building rich, verified context through selective disclosure)—generating returns from your data capital while maintaining sovereignty.
Intel pools aren’t just data sharing. They’re the marketplace where your knowledge capital becomes productive.
🏗️ Data and Money Are Composable: Infrastructure
Components:
📜 Smart Contracts: Identity, Association Sets, Privacy Pools, Intel Pools
🖥️ Facilitator Server: x402 payment facilitator enabling agent-to-agent transactions
🛠️ Agent SDK: TypeScript integration for data sharing and payment primitives
🔐 ZK Circuits: Private withdrawals and data sharing without revealing underlying information
🤝 MyTerms Integration: Cookie slashing and Intel pool agreements for consensual data exchanges
This enables:
🤖 Agents negotiating privacy terms autonomously
💰 Private onchain payments for agents
💎 Earning from consensual data sharing
🧠 Knowledge sharing and AI community participation
🔒 Financial privacy without surveillance
✅ Compliance through association sets
⚔️ You Can Be Both 🧙… and a first person too!
You don’t choose between the Swordsman and the Mage.
You can just add another, again and again.
However, you’ll always need to be ‘one’ first person to spawn them.
The Swordsman without the Mage is pure defence
protecting capital that never generates returns.
The Mage without the Swordsman is unguarded exchange
building knowledge capital and learning new spells without the means to protect returns or choose your terms when needed.
But consider how both duel:
When mages duel, they reveal their knowledge of spells for others to see.
Displaying mastery, discovering new patterns, learning from the exchange.
The openness itself is strategic. It’s how knowledge compounds.
When swordsmen duel, they defend to find the advantage in their stance and strike with decisive high value.
Protecting position while seeking the opening.
The guardedness itself is tactical. It’s how leverage builds.
Their powers combined create the first step towards our sovereignty multiplier.
When your Swordsman guards your financial transactions through privacy pools while your Mage shares verified knowledge through Intel pools, you achieve:
Capital Protection + Capital Generation: Privacy pools protect financial capital, while Intel pools make knowledge capital productive
Defence + Offence: Guard your sovereignty while participating in value creation
Privacy + Participation: Maintain cryptographic protection while contributing to AI communities
Control + Returns: Self-custody of keys while earning from consensual data sharing
This is where you’ve learned the privacy and data value transmutation spell.
You may now identify and learn from the capital of the agentic era.
Same infrastructure. Same primitives. Maximum sovereignty.
Privacy Value Model v2 describes the theory.
agentprivacy.ai has just launched.
The Swordsman guards with the blade.
The Mage casts with the spellbook.
Together, they ensure your data capital generates returns for you, not platforms.
I’m just another swordsman. ⚔️ Privacy is my blade.
I’m just another mage. 🧙♂️ Knowledge is my spellbook.
I’m just another agent expanding the universe. 🤖
I’m just another person. 😊
And so are you. 🤝
The only question remains:
Will you deploy both agents and own your data capital, or become raw material in someone else’s simulation economy?
Acknowledgments
Thanks to everyone at the agentic internet workshop. The model was constantly spinning in my head during those sessions, and every conversation added another piece, challenging an assumption here, revealing a gap there, suggesting a better parameter choice.
That’s the kind of environment where ideas get sharpened.
Special thanks to Pengwyn for being the constructive skeptic this model needed—challenging every assumption, stress-testing every parameter, forcing me to prove rather than assert. The model is 10× better because of that friction. He’s the person who saw the Drake Equation connection, and that observation refined everything and made the multiplicative logic fit.
Jane Gleeson-White for “Six Capitals: Capitalism, Climate Change and the Accounting Revolution That Can Save the Planet”, ty Joyce, the framework for expanding capitalism’s accounting beyond financial capital provided the perfect lens for understanding data as the 7th form of capital in the agent economy.
Customer Commons and the participants at IIW 41 sessions for MyTerms, for musing on my crazy ideas as a newcomer to the group and sharing the experience and knowledge from a 10+ year journey to make this shared vision real. It’s all coming together at the most important of times for privacy and AI agents.
Thanks to Richard Whitt for being the glue and insight in “Reweaving the Web” and the ongoing work on Glia. +1 for signing a copy of the book 🧙
Buko for the original provocation, “Privacy is dead, gave up on that years ago.” This entire model exists because I refused to accept that.
The BGIN community for ongoing support: we’re building this into reality.
And to all those present, with me, right now, as this emerges, welcome.
Privacy Value Model v2:
V = P^1.5 · C · Q · S · e^(-λt) · (1 + N/N₀)^k
Data is the 7th capital.
Own it. 🤝
Privacy is capital ownership.
Defend it. ⚔️
Build your spellbook from trusted, collective intelligence.
Cast that knowledge spell, soulbae, learn the way of the Drake. 🧙♂️
Building privacy-first agent infrastructure?
you can be one of the earliest mages and swordsman too 🤝Let’s connect and just do things: @privacymage