The Moat Does Not Disappear. It Moves.
A response to Chamath Palihapitiya's 'The Collapse of Terminal Value'
Chamath Palihapitiya recently published a note titled 'The Collapse of Terminal Value' that has circulated widely in finance and tech circles. Someone sent it to me and asked what I thought. I read it carefully. Then I read it again.
Here is my honest assessment. Chamath gets some things right. He is correct that AI will reprice some industries. He is correct that the equity risk premium will likely be structurally higher for longer than the post-2008 era conditioned us to expect. He is correct that the comfortable long-run upward drift of equity markets is now something you have to earn rather than assume. Those things are worth taking seriously.
But the piece goes considerably further. It argues that AI will disrupt industries so broadly and so fast that no company can project free cash flows beyond five years, that terminal values collapse across the entire economy simultaneously, that equity markets reprice by 75 percent, that venture capital ceases to function, and that sovereign capital becomes the only institution capable of long-duration thinking. That version of the argument is built on assumptions that do not hold up when you examine them one by one.
Three things drive my skepticism: the infrastructure to run AI at that scale does not yet exist, humans are not the rational actors the thesis requires them to be, and governments will intervene to slow, fragment, and redirect this transition in ways the piece does not account for. Let me take each in turn.
The Infrastructure Is Not There Yet
The terminal value collapse thesis requires AI to be disrupting industries now, at scale, across the entire economy. But the physical infrastructure required to run AI at that scale does not yet exist and cannot be conjured quickly.
Goldman Sachs estimates roughly 122 gigawatts of data center capacity online globally by end of 2030.¹ RAND's analysis under exponential growth scenarios projects AI data center demand reaching 327 gigawatts by the same year.² McKinsey estimates that even if every currently known buildout plan is delivered on time, the United States alone faces a supply deficit of more than 15 gigawatts by 2030.³ Grid connection requests in key markets are already taking four to seven years. Gas power plants that have not yet contracted equipment will not come online until the 2030s.
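To make the mismatch concrete, here is a back-of-the-envelope comparison using only the estimates cited above. The figures are the published projections; the arithmetic is illustrative rather than a forecast, and the two estimates measure slightly different things (total capacity versus AI-specific demand).

```python
# Back-of-the-envelope gap between projected data center power supply
# and projected AI demand by 2030, using the estimates cited above.

supply_gw = 122   # Goldman Sachs: global data center capacity online by end of 2030
demand_gw = 327   # RAND: AI data center demand under exponential growth

gap_gw = demand_gw - supply_gw
print(f"Projected shortfall: {gap_gw} GW")                         # 205 GW
print(f"Demand is {demand_gw / supply_gw:.1f}x projected supply")  # 2.7x
```

Even with wide error bars on both projections, the exponential demand scenario is more than two and a half times the projected supply. That is the gap the disruption-now thesis quietly assumes away.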
This is not a financing constraint. It is a physics and permitting constraint. The energy required to run the AI that is supposed to be disrupting everything is the same energy we already do not have enough of.
Chamath's piece simultaneously argues that AI will compress the terminal value of energy companies and that energy infrastructure is the safe harbor where capital flees when AI destroys value elsewhere. Both cannot be true. The energy constraint does not just slow the timeline. It is internally inconsistent with the thesis itself.
The energy analogy he uses to support the duration risk argument makes this worse. The compression of oil producer multiples between 2019 and 2021 was not a clean market signal about duration risk. It was the collision of a global pandemic, ESG-driven capital withdrawals, an OPEC supply war that briefly pushed oil prices negative, and a commodity dislocation that made any free cash flow projection unreliable. Using that period as evidence that markets rationally price duration risk is analytically weak. Using it in a piece that then recommends those same energy assets as the AI disruption refuge is internally inconsistent.
Humans Are Not Rational Actors
This is the assumption the piece relies on most heavily and examines least carefully. The entire repricing scenario (the 75 percent drawdown, the simultaneous collapse of terminal values across the economy, the orderly rotation of capital into physical assets) requires markets to process a complex, ambiguous, multi-year technological signal and respond coherently, consistently, and at the same time. That is not how markets work.
Markets are driven by sentiment, narrative, panic, and the particular psychology of whoever is holding the most leverage at a given moment. The sectors most exposed to AI disruption will get overpriced on optimism before they get repriced on reality. Others will be written off too early and recover. Some will be disrupted by AI in ways nobody anticipated and others will survive disruption that seemed certain. The idea that capital markets process AI's impact on terminal value cleanly and simultaneously across the entire S&P 500 is a modeling assumption, not a description of how humans actually behave.
The mispricing created by irrational human responses to AI is not a problem. It is where the alpha lives. The investor who keeps their head while everyone else is either euphoric or terrified is the one who generates excess returns.
We already have evidence of this dynamic. In early 2025, DeepSeek released a model that appeared to match frontier AI capabilities at a fraction of the cost. Markets panicked. NVIDIA lost nearly 600 billion dollars in market cap in a single day. Within weeks, most of that had recovered. Every major hyperscaler reaffirmed their capital expenditure guidance without missing a beat. The market had an irrational response to ambiguous information, corrected, and moved on. That is not the behavior of a system rationally pricing duration risk. That is humans being human.
The people who bought quality AI infrastructure names during the panic generated real returns. The people who sold in the chaos locked in real losses. Nothing about that outcome required a sophisticated terminal value model. It required staying calm when others did not.
Automation bias compounds this further. My earlier work, which examines AI in financial risk management directly, documents that 78 percent of organizations report AI adoption outpacing their ability to manage the associated risks, and that only 19 percent have fully implemented governance frameworks.⁴ ⁵ And AI models do fail: hallucination rates run as high as 41 percent on finance-related queries, and the models break down catastrophically during regime changes like March 2020.⁶ By the time they do, the humans watching them have already started deferring to the output. The repricing Chamath fears may be driven not by rational duration analysis but by humans trusting AI models that are themselves wrong.
The Government Will Intervene, and Not Helpfully
Chamath's piece treats the political environment as a constant. It is not. The entire framework assumes technology velocity and capital markets are the dominant variables shaping AI's trajectory. Political decisions on antitrust, regulation, taxation, trade, and industrial policy are treated as footnotes. They are the story.
The current US federal posture is explicitly pro-deregulation on AI, framing its policy as sustaining American dominance through a minimally burdensome national framework. That posture is real and consequential today. It is also one election away from reversal. A different administration with different priorities on antitrust enforcement, data privacy, or algorithmic accountability would alter the deployment timeline in exactly the high-stakes sectors (credit, healthcare, hiring, housing) where AI disruption would have to occur for the terminal value thesis to hold. Those sectors are also the most politically sensitive.
Chamath frames sovereign capital filling the void as a vindication of state capitalism. I read it differently. What you actually get is not an orderly handoff from private capital to patient sovereign capital. You get protectionism. You get competing industrial policies that balkanize supply chains. You get three incompatible AI ecosystems (American, Chinese, and European) developing at different speeds with different standards and different deployment rules. The disruption the piece fears becomes geographically uneven and significantly slower everywhere.
This is not state capitalism getting vindicated. This is the beginning of a managed economy for strategic technology. Managed economies do not optimize for disruption. They optimize for stability, control, and national advantage. Sovereign capital does not disrupt its own economy. It protects it.
The last time nation-states decided a technology was too strategic to leave to markets was nuclear energy. The result was a technology frozen in geopolitical amber for seventy years, overregulated, under-deployed, and shaped almost entirely by military and political considerations. We are only now, under pressure from climate and energy scarcity, beginning to recover its civilian potential. If AI follows the same trajectory, the terminal value question becomes almost irrelevant. The more consequential question is whether the technology's transformative potential gets filtered through the priorities of governments rather than the demands of markets.
Access to a Tool Is Not Mastery of a Domain
Even setting aside the infrastructure constraint, the irrationality of markets, and the political variable, the thesis has a deeper problem. It treats AI as a universal solvent that dissolves competitive advantage. If everyone has AI, no one can build a moat around it. This is true of the tool. It is not true of what you do with the tool.
The spreadsheet did not kill finance. It eliminated people doing manual calculations and created enormous advantage for those who understood what to model and why. The Bloomberg terminal did not flatten investing. Everyone had the same data. Alpha came from the frameworks applied to that data, the questions asked, the patterns recognized. The internet did not kill retail. It killed undifferentiated retail.
When AI becomes universally accessible it becomes infrastructure, like electricity or logistics software. The moat then migrates to whoever has the proprietary data, the domain judgment, the customer relationships, and the operational depth to deploy that infrastructure better than anyone else. Those advantages do not compress under AI. They compound, because AI makes the gap between a skilled operator and a mediocre one wider, not narrower.
Chamath's disruption probability of 20 to 30 percent per year assumes the disruptor has a meaningful structural advantage over the incumbent. But if both the disruptor and the incumbent have equal access to the same AI tools, the incumbent's existing distribution, regulatory positioning, and proprietary data become more valuable, not less. The terminal value question then inverts: it concentrates in businesses with irreplaceable inputs that AI cannot replicate, rather than collapsing across the board.
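It is worth spelling out how aggressive that annual probability is once it compounds. A minimal sketch, using Chamath's own 20 to 30 percent hazard rate; the assumption that years are independent is mine, purely for illustration:

```python
# How a constant annual disruption probability compounds over time.
# Assumes each year is independent, an illustrative simplification.

def survival_probability(annual_disruption_prob: float, years: int) -> float:
    """Probability an incumbent remains undisrupted after `years` years."""
    return (1 - annual_disruption_prob) ** years

for p in (0.20, 0.30):
    for horizon in (5, 10):
        s = survival_probability(p, horizon)
        print(f"p = {p:.0%}, {horizon:2d} years: {s:.1%} survival")

# p = 20%,  5 years: 32.8% survival
# p = 20%, 10 years: 10.7% survival
# p = 30%,  5 years: 16.8% survival
# p = 30%, 10 years:  2.8% survival
```

Under those parameters, barely one incumbent in ten survives a decade at the low end of the range, and almost none at the high end. That is not a hedge about disruption risk; it is near-certainty presented as a probability, and it only holds if the disruptor has an advantage the incumbent cannot match.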
In a world where everyone has AI, execution and nuance are how you generate alpha. That is not a world without moats. It is a world where the moats are harder to see and harder to copy.
Growth Investing Is Not Only a Bet on Tomorrow
The piece argues that growth equity's logic collapses if terminal values compress. This misreads how growth investing actually works. The best growth investments are bets on current trajectory: rate of revenue growth, net revenue retention, payback periods, unit economics already visible in cohort data. These are near-term signals, not decade-long leaps of faith.
Amazon's celebrated reinvestment model worked because each cycle proved itself before the next bet was placed. Books to everything, retail to cloud, logistics to advertising. The market was repeatedly validated, not asked to trust a distant promise. That model would survive a higher discount rate environment because it kept delivering near-term evidence.
On venture capital: many of those pre-revenue billion-dollar valuations never made sense. The 2019 to 2021 vintage of venture pricing was a function of zero interest rates and a market willing to pay for narrative over evidence. The correction that has already occurred, valuations down 60 to 80 percent from peak in many categories, was not AI disrupting venture capital. It was interest rates and basic math reasserting themselves. If AI reduces the cost and time to reach product-market fit, early stage investing actually becomes more functional. Smaller checks. Faster proof points. More honest cycles.
This Is Not a Zero-Sum Game
The piece constructs a world where AI disruption is a fixed pie being redistributed. Every gain is someone else's loss. Economic history is not zero-sum and there is no serious reason to believe AI makes it so.
The internet did not merely redistribute value from old media to new media. It created entirely new categories of value that did not previously exist: the creator economy, the app economy, the gig economy, the cloud economy. None of these were legible as economic concepts before the technology existed. Electricity did not replace gas lighting and stop there. It enabled manufacturing scale, refrigeration, telecommunications, and computing.
As long as there are humans, there will be innovation. Innovation has never been primarily a capital markets story. It is a human story, driven by curiosity, competition, and necessity. Capital markets fund innovation at scale. They do not originate it and they cannot stop it. What Chamath describes in his capital rotation section is a shift in who captures the returns from innovation, not a slowdown in innovation itself. Those are very different claims.
The Hyperscalers Are Self-Financing the Disruption
The most elegant part of Chamath's piece is what he calls the central paradox: the companies committing 300 to 500 billion dollars per year to AI infrastructure are doing so on the assumption of durable returns over seven to fifteen years. If markets reprice to 2 to 7x free cash flow, that capex becomes unfinanceable and the disruption engine disrupts itself.
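Before addressing who funds the buildout, pause on what a repricing to 2 to 7x free cash flow would actually imply. Under the standard perpetuity identity, a fair multiple equals 1/(r - g); inverting it gives the discount rate a given multiple implies. The growth assumption below is mine, purely for illustration:

```python
# What discount rate does a given free-cash-flow multiple imply?
# Perpetuity identity: multiple = 1 / (r - g)  =>  r = 1/multiple + g.
# The long-run growth assumption g is illustrative.

g = 0.03  # assumed long-run FCF growth rate

for multiple in (20, 7, 2):
    r = 1 / multiple + g
    print(f"{multiple:2d}x FCF implies a discount rate of {r:.1%}")

# 20x FCF implies a discount rate of 8.0%
#  7x FCF implies a discount rate of 17.3%
#  2x FCF implies a discount rate of 53.0%
```

Discount rates above 17 percent describe distressed credit, and 53 percent describes a liquidation, not a diversified equity market. Keep that scale in mind, because the paradox depends on it.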
This is wrong about who is actually funding the buildout. Microsoft, Google, Meta, and Amazon are not funding AI infrastructure through equity issuance premised on terminal value assumptions. They are funding it from current operating cash flows. Google generated over 70 billion dollars in free cash flow in 2024. Meta generated over 50 billion.⁷ A compressed equity multiple does not change their capacity to keep building.
The competitive logic makes stopping individually irrational regardless of market conditions. If one hyperscaler pauses and a competitor does not, the gap compounds permanently. That is a prisoner's dilemma, and the dominant strategy is to keep investing. The disruption engine does not disrupt itself. It consolidates around entities with balance sheets large enough to survive uncertainty, which makes it more concentrated and ultimately more powerful.
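A stylized payoff matrix makes the structure explicit. The payoff numbers below are invented purely to illustrate the ordering; nothing in the piece or this response depends on their magnitudes:

```python
# Stylized prisoner's dilemma for two hyperscalers deciding whether to
# keep investing in AI capex. Payoffs are illustrative; only ordering matters.

# (row player payoff, column player payoff)
payoffs = {
    ("invest", "invest"): (1, 1),    # costly arms race, position preserved
    ("invest", "pause"):  (3, -2),   # investor compounds a permanent lead
    ("pause",  "invest"): (-2, 3),   # pauser falls permanently behind
    ("pause",  "pause"):  (2, 2),    # collectively cheaper, but unstable
}

# Is "invest" a dominant strategy for the row player?
for rival_move in ("invest", "pause"):
    better = payoffs[("invest", rival_move)][0] > payoffs[("pause", rival_move)][0]
    print(f"Rival plays {rival_move}: investing is better -> {better}")
# Both lines print True: whatever the rival does, investing pays more,
# even though a coordinated pause would leave both better off.
```

That is the whole argument in four cells: mutual restraint is collectively cheaper, but it is not an equilibrium, so the capex continues regardless of what public markets think of the multiple.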
The Long Cycle Is Actually a Good Thing
I want to be clear about one thing: I do not think the transition Chamath describes is necessarily bad. I think it is probably good, and I think it is going to take longer than he implies.
His own final section quietly concedes most of what precedes it. The most likely outcome, he acknowledges, is not a permanent new regime but an oscillating transition: shorter cycles, fatter tails, higher volatility, followed by recoveries when AI development stalls or consolidates. A world where the equity risk premium structurally rises and the comfortable long-run upward drift of equity markets becomes something you earn rather than assume.
That concession should be the organizing frame for the entire piece, not a footnote. Because if the transition happens over a long cycle, which all historical evidence suggests it will, then almost none of the dramatic repricing scenarios described earlier materialize the way the piece presents them. Long cycles give institutions time to adapt, corporations time to restructure capital allocation, workers across different generations time to reskill and renegotiate, and the energy infrastructure time to catch up.
Markets that you have to earn rather than assume are healthier markets. The discipline that comes back when discount rates rise and easy money dries up is not a problem. It is the system correcting itself.
Every major technological transition has followed this oscillating pattern. The railroad boom and bust of the 1840s. The electrification cycle from the 1880s through the 1920s. The internet bubble and the far larger value creation that followed in the decade after. In each case the transition was volatile and uncertain to the people living through it. Across the full cycle it was unambiguously net positive.
The people who invested in railroads at the peak of the bubble lost money. The people who invested in railroad-enabled businesses across the full cycle created generational wealth. The pattern will repeat. The names of the businesses will be different.
What This Means for How You Invest
Chamath is right that AI will reprice some industries. Businesses whose value depends entirely on an undifferentiated position in a world where AI can replicate their core function cheaply deserve to be repriced. This is not a catastrophe. It is discipline reasserting itself.
He is wrong that this dynamic generalizes across the entire economy simultaneously, that it destroys the logic of long-duration investment, that it makes innovation capital-dependent rather than human-driven, and that it produces a zero-sum redistribution rather than net new value creation.
In a world where AI is infrastructure rather than a moat, the question for every business becomes: what do you have that AI cannot replicate? Proprietary data accumulated over years of customer relationships. Regulatory approvals that took a decade to earn. Operational nuance that lives in the people and processes of an institution. Distribution built through trust rather than technology. Domain judgment developed through multiple credit cycles and regime changes.
These are the inputs that compound in an AI-saturated environment. The terminal value does not collapse across the board. It concentrates. And the investors who understand where it concentrates, and why, will generate the kind of alpha that passive exposure to a broad index will not. The same is true for business owners deciding how to deploy AI in their own operations. The question is the same: what do you have that AI cannot replicate, and how do you build more of it?
The alpha does not come from the tool. It comes from the judgment applied to it. That is true whether you are managing risk inside a bank, allocating capital across a portfolio, or running a business. The tool changes. The principle does not.
Be intentional about thinking. Question the model. Understand the assumptions behind the thesis. Know what the data cannot tell you. In a world where everyone has AI and most people stop there, that discipline is the edge.
Source Notes
1. Goldman Sachs Research, AI Data Center Power Demand, 2024.
2. RAND Corporation, AI's Power Requirements Under Exponential Growth, 2025.
3. McKinsey & Company, AI Power: Expanding Data Center Capacity to Meet Growing Demand, 2024.
4. Moody's AI in Risk Study, 2024 -- 53% adoption rate among risk and compliance professionals.
5. KPMG Future of Risk Survey, 2024 -- 78% of leaders report AI adoption outpacing risk management.
6. AI hallucination rate on finance queries: multiple industry studies, 2024-2025.
7. Alphabet 2024 Annual Report. Meta Platforms 2024 Annual Report.
About the Author
Tamika Tyson is the Founder and Managing Partner of SCALE, where she helps business owners break through barriers, scale their companies, and maximize value. She has spent twenty years in risk management, credit analysis, and financial infrastructure, serving as Global Head of Credit at a major energy company and leading risk teams through some of the most volatile periods in financial markets.