2025-11-18
Bubbles don’t just misprice assets; they quietly delete futures from the collective brain.
Right now, the late-stage AI trade is behaving as if two things are impossible:
- that open source plus new architectures flip the table, and
- that we look back and admit we overbuilt GPU farms for workloads that could have been done cheaper, smaller, differently.
That’s not sober analysis. That’s a refusal to imagine anything outside the current stack.
Non-zero is all that matters
The real probability here isn’t 0%. The moment you admit it’s even 1–5%, the entire story tilts.
If there’s even a 1% world where we discover:
- that “we’ve been digging in the wrong place,”
- that GPU-bound hyperscale was a clumsy interim technology,
- that margins compress and a chunk of capex turns into stranded concrete and metal,
then the way GPUs, AI infra, and proprietary LLM platforms are priced today is absurd. Late stage means tail risk doesn’t fade; it gets sharper. Your entry multiples are at their most fragile.
Yet markets are trading as if that tail simply does not exist.
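That tail is easy to sanity-check with toy arithmetic: a “small” annual probability of a regime break compounds over a multi-year holding period. A minimal sketch, where both the tail probabilities and the ten-year horizon are hypothetical illustrations, not forecasts:

```python
# Toy arithmetic for the tail-risk claim above. All numbers are
# hypothetical illustrations, not forecasts.

def prob_tail_hits(p_annual: float, years: int) -> float:
    """Chance the tail scenario occurs at least once over the horizon,
    assuming independent draws each year."""
    return 1.0 - (1.0 - p_annual) ** years

for p in (0.01, 0.05):
    print(f"P(annual) = {p:.0%} -> P(hit within 10y) = {prob_tail_hits(p, 10):.0%}")
# A 1% annual tail becomes roughly a 10% decade-level risk;
# a 5% annual tail becomes roughly 40%.
```

The independence assumption is crude, but it makes the late-stage point concrete: over the horizon an entry multiple has to survive, a tail that looks negligible per year is anything but.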
Open source is built to flip tables
The risk no one wants to model is synergistic open-source innovation.
We’ve seen versions of this movie:
- UNIX vs. Linux
- Proprietary smartphones vs. Android
- Encarta vs. Wikipedia
The pattern isn’t “open always wins.” The pattern is:
- Closed wins early with capital, distribution, and gloss.
- Open wins later with ruthless cost discipline, flexibility, and compounding ecosystems.
And in this cycle, the people who hate the current GPU-compute insanity the most are exactly the ones hacking on open stuff:
- indie researchers staring at real cloud bills,
- small teams trying to ship useful products without kneeling to a hyperscaler,
- tinkerers who get physically angry at waste and bottlenecks.
Their incentive is brutally simple:
Make the same intelligence cheaper, faster, less GPU-bound.
That’s the exact psychological profile you want in the room when architectures flip.
A Wikipedia-versus-Encarta moment for AI
The “Wikipedia moment” this time won’t be a press release. It’ll be a demo.
Picture a rare genius—or a tiny, sleep-deprived team—standing up with something that’s:
- 5–10x more efficient,
- not chained to NVIDIA-style GPUs,
- so “good enough” on real tasks that no sane CFO can ignore it.
It doesn’t have to be pretty. It just has to be real.
The shockwave writes itself:
- Hyperscalers quietly pause GPU capex to re-run the math.
- VC and internal budgets start pivoting toward “this new thing.”
- Engineers and customers begin asking,
“Do we really want to lock ourselves into the old stack for another 5–10 years?”
You don’t need a revolution; you just need credible partial substitution to blow a hole in “GPU forever.”
Once that happens:
- long-run utilization and pricing expectations get marked down,
- AI capex ROIC looks worse on any honest spreadsheet,
- equity multiples priced on a decade of smooth dominance start to look deranged.
In mid-stage bubbles, a 1% scenario is cute sci-fi.
In late-stage bubbles—stretched valuations, credit tied in, everyone levered to the same story—that 1% is a ticking bomb in the footnotes.
Humans optimize; bubbles don’t
As a species, we drift toward optimization:
- more capability per watt and per dollar,
- less fragile, less concentrated infrastructure,
- intelligence moving toward the edge, not locked in a few GPU temples.
Put it this way:
- On one side: a bottlenecked, over-stretched, overpriced GPU-bound AI stack.
- On the other: a cheaper, faster, more efficient, less GPU-dependent AI world.
We already know which future humans prefer—and, over time, tend to build.
This current regime—hyperscale farms, power crunch, “AI as a luxury compute toy”—looks less like an endpoint and more like scaffolding. Nobody keeps the scaffolding forever. And nobody should be paying blue-chip multiples for the scaffolding company exactly when the architects start redrawing the plans.
Capital allocation when the probability isn’t zero
So what do you actually do with this as an investor?
You don’t need to:
- predict when the Wikipedia moment hits,
- know who triggers it,
- or guess which open-source stack or new chip wins.
You just have to stay honest:
- The probability is not zero.
- The downside to current GPU / hyperscaler / proprietary AI valuations if that world shows up is enormous.
- The upside from buying now, at these prices, before that risk is resolved, is not nearly rich enough to compensate.
That’s the entire decision tree.
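The decision tree above can be sketched as probability-weighted payoffs. Every probability and payoff below is an assumed, hypothetical number chosen only to show the structure of the comparison, not a forecast:

```python
# The decision tree as expected values. All probabilities and payoffs
# are hypothetical assumptions for illustration only.

def ev(outcomes):
    """Expected value of a list of (probability, payoff) pairs."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * payoff for p, payoff in outcomes)

# Buy the GPU-bound stack now: modest remaining upside if the story
# holds (much is already priced in), severe loss if the stack flips.
buy_now = ev([(0.95, +0.10), (0.05, -0.70)])

# Hold cash plus optionality: small carry either way, plus the chance
# to buy the winner cheap after the break resolves.
wait = ev([(0.95, +0.04), (0.05, +0.60)])

print(f"buy now: {buy_now:+.3f}  wait: {wait:+.3f}")
```

Under these particular assumptions the two branches are nearly a wash, which is the argument restated: at current prices, the priced-in upside barely compensates for the tail. The structure, not the specific numbers, is the takeaway; the comparison flips entirely on how you price that 5%.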
You can respect the builders. You can admire Jensen as an operator. You can use the products and love the progress and still say:
“I’m not locking my capital into a late-stage, GPU-bound stack that pretends this is our final form of AI while open source and new architectures are still rolling dice in the background.”
In that light, cash plus optionality stops looking cowardly and starts looking surgical:
- If the “Encarta wins forever” world somehow does arrive, there will still be chances to buy after the inevitable disappointments and shakeouts.
- If the Wikipedia moment shows up—open, efficient, less GPU-bound—you didn’t finance the wrong mine. You kept dry powder for the right one.
Right now, the market is pricing this as if the map is accurate and El Dorado is guaranteed.
All I’m saying is:
I’m not betting my life savings on a map while reality is still rolling dice.