2025-11-13
If you strip away all the noise—PLTR memes, NVDA worship, Cathie screaming price targets on TV—what we're living through is actually pretty simple:
We've plugged immature AI into an immature species and handed it mission-critical systems… then convinced ourselves we've reached the sci-fi endgame.
We absolutely have not.
We've seen bubbles before—but not like this
The dot-com boom and the GFC were human-driven bubbles:
- Humans wrote the stories.
- Humans pulled the lever on leverage.
- Software was "1.0": fast calculators and dumb pipes.
When things broke, you could still:
- halt trading,
- widen limits,
- literally unplug some boxes and slow the cascade.
It was chaos, but it was human chaos. The feedback loop was:
Trader → screen → news → trader again.
Now?
The inputs, stories, and risk rules are increasingly AI-shaped:
- LLMs draft memos, pitches, risk justifications.
- Models design signals and screening rules.
- Low-code / no-code + "agent" workflows hide complexity under prompts.
Humans sit on top of multi-layered black boxes and think they're in control because the output "sounds smart."
The dot-com bubble was written by overconfident humans with dumb tools.
This one is written by overconfident humans with smart-ish tools they barely understand.
LLMs ≠ AGI, but people are acting like they are
By any serious standard, frontier LLMs are:
- pattern machines,
- trained on frozen data,
- checkpointed and shipped.
They are not:
- self-learning in the wild,
- verifying their outputs against reality unaided,
- integrating everything into a unified, conscious model of the world.
They are very good autocomplete plus tools. That's it.
And yet:
- People ask LLMs to write investment theses, then don't really proofread them.
- They ask for "best risk practices" and paste that into policies.
- They use models to idea-gen trading logic, then wire that into execution.
They're building market-relevant black boxes off prompts like "optimize this" and "rewrite this to sound more professional."
So no, it's not AGI. But it is enough intelligence + authority to steer behavior in ways humans no longer track line-by-line.
That's what makes today's market a different kind of unknown territory.
AI consensus isn't magic—it's simple math with shared blind spots
Ask the frontier models a clean, math-based question and they mostly converge:
- "Is a near-zero ERP fragile?" → Yes.
- "Is it easier to maintain a bubble forever or to have a sharp reset?" → Reset.
- "Does massive capex + stretched depreciation + story-driven multiples = risk?" → Obviously.
They agree not because they're secretly AGI, but because:
- the arithmetic is the same,
- the training distribution overlaps,
- and "reasonable human arguments" cluster in the same place.
Now imagine that consensus leaking into:
- risk dashboards ("under these conditions, reduce gross exposure"),
- policy decks ("it is prudent to de-risk when…"),
- and auto-agents ("if vol up + breadth weak + X breached → reduce risk").
You don't need some evil Skynet coordinating a crash.
You only need lots of systems with similar heuristics, all flipping from:
"Buy dip" → "Cut risk"
on the same kind of shock.
That's your AI-shaped "whoosh":
nothing, nothing, nothing—and then everyone leans the same way in one instant. A stampede.
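To make that mechanism concrete, here's a minimal sketch: a couple hundred independent "agents" that all learned roughly the same de-risking rule, just with slightly different cutoffs. The thresholds, distributions, and scenario numbers are made up for illustration; the point is the shape of the flip, not the specific values.

```python
import random

# Toy model: many independent agents with near-identical de-risking heuristics
# learned from overlapping data. All names and thresholds are illustrative.
random.seed(7)

N_AGENTS = 200
agents = [
    {
        "vol_trigger": random.gauss(30.0, 2.0),       # VIX-style vol threshold
        "breadth_trigger": random.gauss(0.40, 0.03),  # % of stocks above 50-day MA
    }
    for _ in range(N_AGENTS)
]

def stance(agent, vol, breadth):
    """'cut' if the agent's own rule is breached, else keep buying the dip."""
    if vol > agent["vol_trigger"] and breadth < agent["breadth_trigger"]:
        return "cut"
    return "buy"

# A calm tape, a wobble, then one shock that pushes past the shared cluster.
for vol, breadth, label in [(18, 0.62, "calm"), (24, 0.52, "wobble"), (33, 0.35, "shock")]:
    cutting = sum(stance(a, vol, breadth) == "cut" for a in agents)
    print(f"{label:>6}: vol={vol}, breadth={breadth:.2f} -> {cutting}/{N_AGENTS} agents cut risk")
```

Nothing coordinates these agents. On the calm and wobbly days almost nobody cuts; on the shock day the vast majority flip at once, because their triggers were all drawn from the same narrow cluster. That's the whoosh.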
Human nature hasn't changed at all
This is the real joke.
For thousands of years, every wisdom tradition—East and West—has screamed some version of the same warning:
- "Know thyself."
- 學而時習之, 不亦說乎 (학이시습지, 불역열호아): "To learn and, in due time, practice what you have learned, is that not a pleasure?"
The core is the same:
You are dangerous to yourself when you don't understand your own greed, fear, vanity, and need for vindication.
Now we've layered LLMs on top, and instead of deeper self-knowledge we got:
- faster narratives,
- more polished hopium,
- and new excuses not to confront our own biases.
This is where the trap gets really insidious. Some people—the ones who've actually read the books and done the work—fall into what I call the saint-mindset trap:
- "I shouldn't crave vindication."
- "I shouldn't feel ego or resentment."
- "I should be above greed."
But you're human. You do crave those things. Everyone does. Even Burry shutting his fund and coming back as Cassandra Unchained is bleeding that mix of ego, vindication, and war-cry. He's not a saint; he's a smart, scarred primate who finally said, "Screw it—I'll only risk my own money."
Trying to pretend we're beyond all that because we now have LLMs?
That's delusion on top of delusion.
Why I'm in cash and staying there
Given all of this, my stance is brutally logical:
- The regime is an AI-shaped black box.
- Human self-knowledge is… not great.
- Valuations are fragile and require heroic assumptions.
- Narratives are written and amplified by tools most users don't deeply understand.
So I choose:
- No euphoric longs in M7 / AI capex land.
- No hero shorts that tie my sanity to the timing of the crash.
- Cash baseline while I watch, log, and keep learning.
I'm not "missing out." I'm explicitly declining to be:
- exit liquidity for Buffett when he trims,
- range-trading fodder for Cathie while she sells TSLA into her own hype,
- or a casualty when the AI consensus flips from "it'll be fine" to "reduce risk now."
If the bubble keeps inflating? My cash still funds my life and my learning.
If/when it pops and valuations reset? My cash becomes optionality, not regret.
That's what a real 3-year sleep test looks like.
We're not AI. We're the ones who still need to learn.
The most dangerous fantasy right now isn't "AGI is close."
It's "we can fast-forward past the painful part of learning because we have AI."
History says:
Euphoria → crash and burn → lessons → rebuild stronger.
What we're trying now is:
Euphoria → LLM hype → skip the pain → straight to utopia.
BS.
Even these models don't work that way. They don't keep self-improving alone; their learning stops at the checkpoint humans choose. Real wisdom requires practice, feedback, and correction—習, not just 學.
We haven't done that work yet with AI in markets. We're at the "play with fire and call it progress" stage.
So no, we are not AI. We're still the same messy, status-addicted, easily-fooled humans we've always been—just with bigger levers and prettier interfaces.
Until we accept that—and act like it—cash on the sidelines isn't cowardice.
It's the only honest way to say:
"I know myself well enough not to hand my life to a black box in a bubble I can't model."
The question isn't whether the bubble pops.
It's whether you'll still be standing when it does.