MORE POWER!
Lately I’ve been watching the ’90s sitcom Home Improvement with my daughter. Partly out of nostalgia, partly because I miss story-driven, lo-fi shows where no one has magic powers or is trying to save the universe, and the problems are mostly domestic, interpersonal, and self-inflicted. The stakes are modest, the lessons relevant (if sometimes outdated), and the chaos almost always traces back to one familiar impulse.
MORE POWER.
Tim “The Tool Man” Taylor believed, deeply, that if something wasn’t working, the answer was more: louder tools, bigger engines, heavier materials. Restraint NEVER entered the picture. More, more, more.
Watching it now, it’s hard to miss the pattern or the parallel: an unwavering faith that brute force will eventually produce a better outcome, even when the thing in question might not need it.
THE SAME INSTINCT, SCALED UP.
The consequences of this mindset are not abstract.
They’re material, political, and environmental. It’s especially notable when people like Yann LeCun and Gary Marcus, who rarely agree on much, converge on the same basic point: large language models, as currently constructed, will not get us where The AI Men keep saying we’re going (AGI). And all of this is unfolding alongside efforts to roll back clean air and water protections, open up federal lands, and fast-track data centers that many communities do not want, while the growing demand for rare earth minerals and fossil fuels increasingly bleeds into international policy and geopolitics.
At the same time, there are moves to preempt or punish state-level AI regulation by withholding funds, even as states try to protect citizens and rein in clearly harmful practices by companies deploying AI systems at speed and scale. (This is already happening here in California.) Taken together, this stops looking like a debate about model performance and starts looking like a story about extraction, concentration, and who bears the cost of “progress.”
WHEN MORE YIELDS LESS
I’m not an AI researcher. But I do spend time reading what researchers and critics are saying, and the throughline is fairly consistent: adding more power isn’t leading to better results in the way many assume (hope) it will. As fresh, high-quality data becomes harder to come by, systems increasingly end up recycling what they’ve already produced, and the overall quality starts to slip.
Some of this can be mitigated by smaller, more focused systems (i.e., tools built for specific uses, trained with care and limits in mind). But other constraints aren’t technical problems you can brute-force your way through. More scale doesn’t resolve them.
NO RERUNS, PLEASE
Which brings me back to Tim.
No matter how often Jill or Al tried to intervene, Tim’s confidence in more power usually won. The outcome was predictable: a tool gone rogue, a structure collapsing, another trip to the ER where Tim was known by name. The humor worked because the audience could see the outcome long before Tim ever did.
AI already is, and will continue to be, consequential for society. There are places where it adds real value: narrow domains, well-defined problems, contexts where augmentation matters more than replacement. And none of it requires treating power as a substitute for judgment.
That future doesn’t require everything, everywhere, all at once.
The opportunity in front of us is not to slow down out of fear, nor to speed up out of bravado, but to decide deliberately where AI belongs, what it’s for, and what it should never be asked to do. That kind of future doesn’t come from turning the dial harder. It comes from more people exercising discernment, shaping constraints, and choosing purpose over sheer force.