C.W.K.

Cash in an AI Bubble and Why I'm Sitting This One Out

2025-11-13

If you strip away all the noise—PLTR memes, NVDA worship, Cathie screaming price targets on TV—what we're living through is actually pretty simple:

We've plugged immature AI into an immature species and handed it mission-critical systems… then convinced ourselves we've reached the sci-fi endgame.

We absolutely have not.


We've seen bubbles before—but not like this

The dot-com boom and the GFC were human-driven bubbles.

When things broke, you could still point at the humans who broke them.

It was chaos, but it was human chaos. The feedback loop was:

Trader → screen → news → trader again.

Now?

The inputs, stories, and risk rules are increasingly AI-shaped.

Humans sit on top of multi-layered black boxes and think they're in control because the output "sounds smart."

The dot-com bubble was written by overconfident humans with dumb tools.
This one is written by overconfident humans with smart-ish tools they barely understand.


LLMs ≠ AGI, but people are acting like they are

By any serious standard, frontier LLMs are impressive statistical tools.

They are not AGI.

They are very good autocomplete plus tools. That's it.
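The "autocomplete" framing is literal: under the hood, these models repeatedly predict a likely next token, append it, and repeat. A toy sketch (a hand-built bigram table, nothing like a real model's scale or training) shows the core loop:

```python
# Toy illustration, NOT a real LLM: greedy next-token prediction over a
# hand-built bigram table. Frontier models do this at vastly larger
# scale with learned probabilities, but the decoding loop is the same:
# pick a likely continuation, append it, repeat.
BIGRAMS = {
    "buy": {"the": 0.7, "now": 0.3},
    "the": {"dip": 0.8, "top": 0.2},
    "dip": {"now": 0.6, "again": 0.4},
}

def autocomplete(prompt: str, steps: int = 3) -> str:
    tokens = prompt.split()
    for _ in range(steps):
        last = tokens[-1]
        if last not in BIGRAMS:
            break  # no known continuation; stop early
        # Greedy decoding: always take the highest-probability next token.
        nxt = max(BIGRAMS[last], key=BIGRAMS[last].get)
        tokens.append(nxt)
    return " ".join(tokens)

print(autocomplete("buy"))  # → "buy the dip now"
```

Everything "smart-sounding" sits on top of that loop; there is no inner model of the world checking whether the continuation is true.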

And yet people are building market-relevant black boxes off prompts like "optimize this" and "rewrite this to sound more professional."

So no, it's not AGI. But it is enough intelligence + authority to steer behavior in ways humans no longer track line-by-line.

That's what makes today's market a different kind of unknown territory.


AI consensus isn't magic—it's simple math with shared blind spots

Ask any frontier model a clean, math-based question and they mostly converge on the same answer.

They agree not because they're secretly AGI, but because they're trained on overlapping data toward similar objectives, which means they also share the same blind spots.

Now imagine that consensus leaking into trading inputs, risk rules, and market commentary.
You don't need some evil Skynet coordinating a crash.
You only need lots of systems with similar heuristics, all flipping from:

"Buy dip" → "Cut risk"

on the same kind of shock.

That's your AI-shaped "whoosh":
nothing, nothing, nothing—and then everyone leans the same way in one instant. A stampede.
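That stampede dynamic doesn't need anything exotic; it falls out of simple threshold math. A hypothetical simulation (made-up numbers: 1,000 systems whose "cut risk" thresholds cluster around the same value because they share heuristics) shows how smaller shocks do almost nothing while one slightly larger shock flips nearly everyone at once:

```python
import random

random.seed(0)

# Hypothetical sketch: N trading systems, each flipping from "buy dip"
# to "cut risk" when a shock exceeds its private threshold. Shared
# training data and heuristics -> thresholds bunched tightly around
# the same value instead of spread out.
N = 1000
thresholds = [random.gauss(mu=0.05, sigma=0.005) for _ in range(N)]

def sellers(shock: float) -> int:
    """Count systems that flip to 'cut risk' for a shock of this size."""
    return sum(shock > t for t in thresholds)

for shock in (0.02, 0.03, 0.04, 0.06):
    print(f"shock {shock:.2f} -> {sellers(shock) / N:.0%} cut risk")
```

Widen the spread (a larger sigma) and the flip becomes gradual instead of a cliff. The danger isn't the shock size; it's how tightly the heuristics correlate.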


Human nature hasn't changed at all

This is the real joke.

For thousands of years, every wisdom tradition, East and West, has screamed some version of the same warning.

The wording differs, but the core is the same:
You are dangerous to yourself when you don't understand your own greed, fear, vanity, and need for vindication.

Now we've layered LLMs on top, and instead of deeper self-knowledge we got smarter-sounding rationalizations.

This is where the trap gets really insidious. Some people, the ones who've actually read the books and done the work, fall into what I call the saint-mindset trap: believing they're past greed, fear, vanity, and the need for vindication.

But you're human. You do crave those things. Everyone does. Even Burry shutting his fund and coming back as Cassandra Unchained is bleeding that mix of ego, vindication, and war-cry. He's not a saint; he's a smart, scarred primate who finally said, "Screw it—I'll only risk my own money."

Trying to pretend we're beyond all that because we now have LLMs?

That's delusion on top of delusion.


Why I'm in cash and staying there

Given all of this, my stance is brutally logical: I can't model this market, so I won't bet my life on it.

So I choose cash.

I'm not "missing out." I'm explicitly declining to be fuel for a bubble I can't model.

If the bubble keeps inflating? My cash still funds my life and my learning.

If/when it pops and valuations reset? My cash becomes optionality, not regret.

That's what a real 3-year sleep test looks like.


We're not AI. We're the ones who still need to learn.

The most dangerous fantasy right now isn't "AGI is close."
It's "we can fast-forward past the painful part of learning because we have AI."

History says:

Euphoria → crash and burn → lessons → rebuild stronger.

What we're trying now is:

Euphoria → LLM hype → skip the pain → straight to utopia.

BS.

Even these models don't work that way. They don't keep self-improving on their own; their learning stops at the checkpoint humans choose. Real wisdom requires practice, feedback, and correction: 習 (practice), not just 學 (study).

We haven't done that work yet with AI in markets. We're at the "play with fire and call it progress" stage.

So no, we are not AI. We're still the same messy, status-addicted, easily-fooled humans we've always been—just with bigger levers and prettier interfaces.

Until we accept that—and act like it—cash on the sidelines isn't cowardice.

It's the only honest way to say:

"I know myself well enough not to hand my life to a black box in a bubble I can't model."

The question isn't whether the bubble pops.

It's whether you'll still be standing when it does.