The Future We Choose
How we use AI now will shape what comes next. We have options.
So what do we do?
Over the past year, as I’ve gone deep into AI, I’ve been asked that question again and again: So what do we do?
Baked into it is a mix of uncertainty, anxiety, hope, and aspiration—and of course, there’s no single answer. “AI” is a poor catch-all for a complex, varied field, and part of why we’re in this murky place is that we’ve bought into the hype that AI is a magic bullet for the dull, burdensome tasks no one wants to do. But we, friends, are serious people, and so, complexity it is.
There are plenty of experts offering courses, writing books, and publishing newsletters on the foundations of AI and its practical applications (e.g., deploying an AI agent in your sales cycle). These are incredibly useful and have informed my thinking.
But the question I keep circling back to is: How do we handle this shift better than the last one? What did we learn, and how do we apply it now?
To start answering that, I’ve put together a working framework — less “how quickly can we deploy AI?” and more “how can we be thoughtful and ethical in how we move forward?” It’s not comprehensive, and it will evolve, but my hope is that it supports decision-making or, at the very least, sparks a useful conversation.
1. Ask “Should we?” as often as “Can we?”
This is hard, especially under pressure from leadership, investors, or competitors. But it’s a question we skip too often because we don’t always like or know the answer. Asking “should we?” helps surface uncomfortable concerns early, so we can address them, mitigate them, or move forward with clearer eyes.
What’s “acceptable” risk will vary. But at this scale and speed, not asking the question is what’s unacceptable.
2. Swap “How much money can we save?” with “How can we improve what we offer?”
Cory Doctorow and Ed Zitron have been calling out the enshittification of products for years – essentially, how monopolization leads to degraded user experiences. AI done wrong accelerates that trend.
When implementation is only about efficiency and savings, we risk creating hollow, exploitative products built on years of harvested data (I see you, Delta and your “dynamic pricing”). But when we focus on using AI to improve our services, it becomes a good-faith effort to better serve people.
I know that sounds idealistic. But we already have real examples (see: Ikea vs. Klaviyo), and Ethan Mollick offers plenty of research showing how smart, human-led AI adoption increases both quality and impact of work.
3. Build AI literacy for yourself and your team
AI literacy isn’t just technical upskilling. It’s about understanding what AI can and can’t do, how it works, and what it needs. As internet pioneer Vint Cerf put it, “AI literacy is to have a conscious sense of both the power and peril of artificial intelligence.”
It helps you see through inflated claims, spot snake oil, and ask smarter questions. Think of old cigarette ads; they seem absurd now, but they worked because people didn’t know better. We’re still in that stage with AI.
Not all products are equal, not all models are to be trusted, not all performance claims are legit. Literacy helps us tell the difference.
4. Be transparent about when and why you’re using AI
One of the most common concerns I hear is about authenticity: will we know what’s human-made vs. AI-generated? That’s not just a creative or cultural issue; it has legal, economic, and social implications.
Transparency is a small but important first step. For instance, I wrote this post “old school,” with months of notes, a spreadsheet of references, a blank doc, and a complete first draft AND I used AI to tighten sections or suggest alternate phrasing. That doesn’t compromise my ideas or ownership; it’s a tool I used, not a ghostwriter.
I doubt transparency will be a requirement forever, but in this moment, when trust is fragile, it feels necessary.
(Kudos to the universities that are starting to define clear, transparent rules for AI use in student work. That’s the kind of clarity we need more of.)
5. Let workers lead
You’ve probably read the headlines about companies mandating AI integration and department heads scrambling to check the “we’re using AI” box. That’s not what I mean.
As Kate Crawford writes in Atlas of AI, this is less a collaboration than a forced engagement. Workers are expected to upskill, adapt, and accept, but rarely asked to lead or shape how AI shows up in their roles.
This misses a tremendous opportunity.
Instead of squeezing more productivity from already-stretched teams, or replacing them altogether, organizations could use AI to support people — and not just for efficiency, but for better outcomes across the board. That only happens when workers have real autonomy and input in how AI is adopted.
AI ROI often falls short. Why? 1) Most workflows aren’t automation-ready, 2) AI tends to create more work, not less, and 3) workers, not executives, usually know how things actually get done.
Tapping into worker insight isn’t just the right thing to do; it’s what makes AI pay off.
6. Design for protection, not extraction
This may be one of the hardest.
We seem to have accepted that “data is the new oil” and AI is the refinery. That mindset underpins so many products — and that’s the problem. More and more, it's the hidden layer of data collection, often invisible to consumers, that's driving corporate profits, not the products themselves.
Not all data needs to be extracted, and not all uses are acceptable. We have models for a different path: the EU’s GDPR, Denmark’s consideration of copyright protections for personal likeness, etc.
Privacy-focused design and data protection aren’t impossible. They’re just not quickly profitable. But they’re how we build long-term trust and safety. (For a great primer, check out Privacy is Hard and Seven Other Myths.)
Ultimately, this question leads to the last pillar in our framework.
7. Ask: Does this contribute to a world we want to live in?
This circles back to the beginning — but it’s worth ending on.
Do we want a world where every product spies on us? Where our kids are shaped by systems we don’t understand? Where the people building and selling these tools quietly opt their families out of the experiences they’ve created?
As Johann Hari wrote in Stolen Focus: “Take care what technologies you use, because your consciousness will, over time, come to be shaped by those technologies.”
We can choose differently. We can build tools that protect privacy, nurture focus, foster creativity, and make us better at our work and more present in our lives. We can design systems that expand access to healthcare, improve disaster response, accelerate climate adaptation, and help people learn and earn in ways that weren’t possible before.
The future takes shape in what we make today. Let’s build with care, and use AI to unlock potential, not just cut corners.