AI, WORKSLOP, AND DOING BETTER
There’s been a growing chorus this past week about a new workplace phenomenon: workslop. It’s a term coined to describe the influx of AI-generated content that looks like solid work, but lacks the substance to move anything forward. It’s clean, confident, polished…and mostly useless.
According to the researchers who coined the term, 40% of full-time U.S. employees said they’ve encountered workslop in the past month. And unlike the AI-generated junk that clogs our social feeds, this stuff is circulating inside companies. It’s being passed between peers, handed up the ladder to managers, or trickling down from the C-suite: low-quality content dressed up as good work that ultimately costs more time to clean up than it saves.
The irony here is thick. AI was sold as the productivity hack of the century — a way to do more, faster. But the reality is turning out to be more complicated. Despite its rapid adoption, a recent MIT study found that 95% of companies using AI don’t see a clear return on investment. (Some have questioned the study’s methodology, but the broader takeaway stands: companies aren’t seeing the outcomes they expected.)
So what’s going wrong?
Let’s be honest: this isn’t just an AI problem. It’s a management problem. A strategy problem. A problem of forgetting everything we learned during the last 20 years of digital transformation and acting like this is brand new territory.
Because we’ve been here before.
Back in the 2000s, businesses rushed to “get online,” convinced that having a website meant having a business. Strategy, customer insight, and user experience all came later (if at all). We saw bloated apps, unreadable dashboards, and piles of digital junk created not because they worked, but because we could.
We’re making the same mistakes again, just with more powerful tools.
What the research says
One of the better-known studies cited in Ethan Mollick’s Co-Intelligence looked at 758 consultants at Boston Consulting Group. Researchers gave some of them access to generative AI tools and left others to work as usual. The outcome? The AI-augmented consultants were faster and their outputs were rated more highly, but only within certain boundaries.
As soon as the tasks drifted into uncharted territory — what the researchers call tasks beyond the “jagged frontier” of AI capability — performance dropped, badly. While 84% of consultants working without AI got the answer right, those using AI got it right only 60–70% of the time.
Worse, when people relied on AI, they stopped thinking. In another study by Fabrizio Dell'Acqua, participants who used higher-quality AI actually performed worse, because they blindly accepted its recommendations. In contrast, those using lower-quality AI were more alert, more skeptical, and more careful.
In other words: the better the AI, the more likely we are to fall asleep at the wheel.
So what’s really going on?
We can’t ignore the larger context. Many employees are being told — sometimes explicitly, sometimes by implication — that AI adoption is not optional. Leadership teams want to be “AI-forward” but rarely provide the training, clarity, or guardrails that real transformation requires. Workers are expected to get results and learn new tools and pick up the slack from layoffs or hiring freezes — all while the promised “efficiency gains” fail to materialize.
To meet those expectations, people turn to the tools at hand. They defer to the AI and offload their thinking, producing what they believe their leaders want to see (more output, faster) without the time or support to evaluate whether that output is actually useful.
This is what workslop looks like in the wild.
It’s not the tools. It’s how we’re using them.
This is where I want to pause and push back a bit, not against AI but against how we’ve been implementing it. We are not helpless passengers. We know what it takes to adopt new technologies responsibly (even if we didn’t always do it. Looking at you, Meta). We’ve spent the last two decades developing methods, frameworks, and practices to align tech with real human needs, and those lessons are still valid.
We know that successful transformation starts with clarity about what we’re trying to achieve, not just what tech we want to use. That it requires buy-in, change management, and skilled workers who can translate between organizational priorities and tool capabilities. We know that skipping steps to chase speed almost always leads to waste.
So why are we pretending otherwise?
Why are we downsizing the very people — designers, researchers, strategists — who’ve been doing this kind of connective thinking for years? Why are we pushing AI as the first answer, instead of letting it be the last tool in a well-considered chain of actions?
Google Cloud CTO Will Grannis once said, “Being AI-first means using it last.” That’s a powerful idea, and a good reminder that the best use of AI doesn’t start with prompts. It starts with problems.
What are we trying to solve? Who’s involved? What’s the context? Is this within AI’s capabilities? Do we have the data? What’s the risk? Then, once we’ve asked and answered those questions, we can choose the right tool. Sometimes that’s AI. Sometimes it’s a pen, a Post-it, or a quiet walk around the block.
AI is not destiny. It’s direction. We get to decide how it shows up at work and in life.
We’ve got a lot more work to do.