WHAT IF AI IS A BUBBLE?
Lately, amid the usual silver-bullet promises about AI, a new kind of headline has started slipping into the feed:
What if this is a bubble?
Honestly, it would be more shocking if it weren’t.
For years now, many have been pointing to the same math: you can’t pour billions of dollars into a space with a ~20% success rate, watch the fishy financial arrangements pile up, and pretend it’s all normal. No executive team would greenlight that anywhere else. Yet here we are, acting surprised that the ground feels a little unstable.
We’ve been in an AI spring: a familiar mix of exuberance, over-investment, distorted expectations, and a collective willingness to believe this time would be different. Now the temperature is shifting, and people are wondering what happens if (when) the air finally goes out of the balloon.
But a bubble isn’t the same as a bust.
And it doesn’t mean AI is hollow.
What it does mean is that we may have bet too heavily, too narrowly, and too impatiently on one particular branch of the AI tree. An enormous share of recent funding — both private and academic — has been funneled almost exclusively into LLMs, as if they represent the entire field rather than one (impressive, but limited) technique. They don’t. And expecting them to carry the weight of every promised transformation was always unrealistic.
If a correction comes, it shouldn’t be a judgment on “AI” as a whole, but on:
hype instead of strategy
scale over substance
capabilities without context
solutions without problems
value propositions no one ever stopped to articulate
And on our collective habit of treating technology as destiny rather than as a series of design, governance, and purpose decisions.
When bubbles burst, they tend to leave something useful behind: clarity. The real work becomes visible again.
That work includes the questions we should have been asking from the beginning:
Who is this for?
What problem does it actually solve?
Who benefits?
Who’s exposed to risk?
What values are we optimizing for?
Where do we insist on human judgment?
What boundaries matter?
These aren’t barriers to innovation. They’re the conditions that make durable innovation possible.
If this is a bubble, the answer isn’t to throw the whole field out with it. It’s to rebuild with more honesty, more literacy, and more alignment around what “good” looks like for people, not just for markets.
AI was never going to reshape the world on hype alone, and bubbles don’t erase the need for thoughtful, human-centered decision-making.
If anything, they make that need more urgent than ever.