The Frontier & The Fortress

From weird websites to AI co-pilots, 8 lessons from the Internet and AI eras

I’ve seen a lot of chatter lately suggesting that people skeptical of AI today are the same type who once underestimated the potential and inevitable impact of the internet. In other words, today's caution is yesterday’s ignorance.

As someone who has been researching, designing, and building digital experiences, tools, and platforms since those days, and who now holds a more cautious perspective on AI, I wondered: is that true? Is questioning AI claims and buzzy applications the same as misunderstanding what the internet was going to be twenty-odd years ago?

To make sense of it, I found myself returning to 8 comparisons of what feels different (and doesn’t) between then and now.

#1 From open access to lords at the gates

When the commercial web emerged, it was messy, slow, and often delightfully weird and ridiculous, but it was open. With protocols and tech developed through university money or public funding, anyone could view source code, write HTML, launch a page, and reach (some) people.

Contrast that with today’s AI landscape. You need enormous capital, deep machine learning expertise, and access to powerful GPUs. As Zoe Scaman aptly put it: "The big AI players own the rails... You become a renter, not an owner. Your margins and your capabilities are contingent on someone else’s roadmap."

AI isn’t a playground. It’s a landlord economy.

And yet it’s also true that AI tools are unlocking new capabilities for individuals. For many creatives, small businesses, and learners, it feels like the gates have been opened — and in some ways, they have.

But it’s important to remember: they’re being let in on someone else’s terms. Those terms can change. The free or moderately priced tier can disappear. The feature you depend on can be deprecated or paywalled. The company whose model you’ve built a workflow on can be acquired, pivoted, or shut down. Or it can release a single feature that duplicates (and subsequently destroys) your offering.

The gate looks open now, but for how long?

#2 From unknown winners to the preordained

Anthropic abruptly cutting off Windsurf’s access to its models after changing the terms on them is a reminder that the infrastructure owners can pull the plug at any time. And that’s one more way this era feels different.

Back in the good ol’ early aughts, Google wasn’t Google yet. Facebook didn’t exist. Apple hadn’t launched the iPhone. Microsoft and IBM were dominant, but they failed to shape the early web, and they certainly weren’t influencing youth culture.

Now? The same entrenched platforms that own search, ad networks, OSs, browsers, productivity suites, and unimaginable mountains of data on billions of people are also leading the AI surge. They’re not just building with AI; they’re using it to deepen and lock in their dominance. (And the government is now removing guardrails to help them do it faster.)

#3 From wide open to walled in

The early internet was a wide-open frontier with a freedom to imagine genuinely novel products. Napster wasn’t a better CD store. PayPal wasn’t a digital checkbook. eBay didn’t mimic retail. They were category-defining breakthroughs.

AI doesn’t feel like that.

Most applications fall into a few familiar buckets: writing tools, code copilots, image generators, support bots. Instead of reimagining categories, most builders are reinforcing what already exists, just faster, cheaper, sparklier. It’s “X, now with AI.”

Not because AI lacks potential, but because today’s conditions don’t reward risk or originality. With locked-up infrastructure, concentrated funding, and pressure to monetize fast, we’re not building Napster. We’re optimizing email.

#4 From no UX to no fun

In the early days, user experience barely existed. There were no established conventions and little investment in design beyond aesthetics. Engineers called the shots, and as a result, most products were clunky, confusing, and pretty ugly (take a look at any enterprise legacy system!). Steve Jobs hadn’t yet mainstreamed the idea that “design is how it works.”

Now UX and UI are cleaner, faster, and more accessible…and completely homogenized. We skipped past the golden age of exploratory, sometimes funky, design and landed straight into a world of component libraries. Design systems keep teams efficient and fast, but they also flatten experiences. The goal isn’t delight anymore — it’s throughput.

We’ve moved from engineers leading experiences to business analysts optimizing pixel color based on funnel conversion. Either way, novel, stand-out user experiences are not part of the value proposition.

#5 From money and hype…to more money and more hype

The early internet was full of wild bets and paper millionaires. Investors poured money into anything with a “.com” even when there was no product, revenue, or real plan. The bubble burst, but the era’s optimism endured and evolved as the internet matured.

AI, too, has its own gold rush. This time, the wealth is concentrated in the hands of a few incumbents, and it takes massive capital and infrastructure even to play. It moves faster and promises more: it’ll drive your car, cure disease, and replace knowledge work entirely (the part that really gets the C-suite excited!).

Entire companies are shifting focus to AI initiatives they don’t need and can’t sustain because no one wants (or thinks they can afford) to be seen as falling behind. I heard one Fortune 500 sales leader talk about how his team was expected to deploy two AI initiatives per quarter. That’s not transformation; it’s mania.

And yet, as the NY Times reported, most companies are still waiting to see that sweet, sweet return on their billions invested. Cory Doctorow puts it this way: “AI is only secondarily a technological phenomenon; it is primarily a financial phenomenon—hundreds of billions of dollars in investment capital in search of a return.”

#6 From scarce talent to underused abundance

A major gain since the 00s: we now have a huge, global, and diverse pool of experienced talent. In the early 2000s, most digital teams were figuring it out as they went (which was equally exciting and terrifying).

Now? There’s a deep bench of designers, researchers, engineers, strategists, and writers with decades of digital experience…that we’re not using to ask big, open questions about how AI can be applied in rich, useful ways. Instead, they’re being laid off, replaced by the AI they should be helping to imagine.

I saw someone on LinkedIn suggest they'd just ask AI to invent a design pattern for a net-new product experience. Umm…good luck? AI is trained on what’s already been done, not what hasn’t been imagined yet. In my experience with AI UX/UI tools, it can manage standard, repeatable patterns reasonably well, but when it comes to complex user flows, nuanced content/feature decisions, or balancing business goals with user needs, it’s still just guessing.

#7 From total noobs to digital natives

One of the biggest differences in this then/now journey: the human user. Back then, people weren’t just new to your product; they were new to the entire idea of being online. Adoption meant BIG behavioral shifts: convincing someone to move a task from offline to online, and then to stick with it. Your product actually had to be easier and better than what they did before.

Here’s a real-life example I worked on: take the real, relatable problem of finding a nearby branch or ATM and getting there before it closed to fill out a deposit slip and hand over a check, and solve it with an app and a new, digital workflow.

Our challenge was communicating that this new way existed, persuading them it was trustworthy, and then teaching them — often step by step — how to do it. That was a herculean task, and people did it, slowly, across practically every organization, industry, and sector.

AI is benefiting hugely from all the work of the early internet. On top of those gains in infrastructure and capital, we now have a global, fluent, engaged audience that expects to do everything online…

…but the market isn’t delivering better products because monopolies don’t need to compete. And that spirit of bold, user-centered innovation feels increasingly rare.

#8 From blissful ignorance to willful amnesia

Recently, I’ve found myself saying, “I don’t want to move fast and break things. I want to move intentionally and make good things.”
Because 1: Do we even know what we are breaking?
And 2: Originally that phrase meant something else.

It meant don’t be precious about your own ideas. Get to MVP fast. Don’t be afraid to break from your own assumptions or pivot away from your first idea. And yes, disrupt an existing system if entrenched companies were offering shitty products and services.

But over time, it stopped being about breaking your own stuff to make better stuff and instead became about breaking everything. Systems, norms, guardrails — who cares, as long as you ship? That shift wasn't accidental. It was popularized by Zuckerberg, after all.

And 3: the harms used to be slower to surface. Misinformation, scams, and addiction emerged gradually, (theoretically) giving us time to adapt and respond (or, you know, say fuck it and double down. You know who they are).

Today, we don’t have the luxury of innocence. We’ve seen how tech reshapes behavior, exploits labor, concentrates power, and rewires attention. The risks now aren’t hypothetical, but historical — and yet we’re accelerating the cycle!

This time, we know what’s at stake. To me, this means we’re not just participants in this tech wave, but more accountable for how it unfolds. We have to ask: Who does this serve? Who gets left out? What are we building, and what are we breaking?

Because if we’ve learned anything in the twenty years since, it’s not just “could we?” but, more than ever, “should we?”

This time, we should know and do better.

____________

So what now?

I believe transparency around AI usage is critical: I used ChatGPT (GPT-4) as a collaborator in this piece. Specifically, I provided copious notes, the points I wanted to make, the references and sources to support them, and a general framework, and then edited A LOT (I am a writer, after all, and all dashes are mine), but the back-and-forth was a part of my process.

After I landed on a version I was happy with and shared it with ChatGPT again, I asked: Being truthful and not just wanting to please me, what do you think about what I've been writing? Do you feel like it is too one-sided? Do you feel I have blind spots I'm overlooking? We then collaborated on this summation:

This piece isn’t about dismissing AI or glorifying the past. It’s about remembering that we’ve done this before: rushed toward transformation without always asking what it’s for, who it’s serving, and what gets left behind. But unlike last time, we have more tools, experience, and lessons learned. We don’t have to stop building; we need to build better.

I’ve got ideas about what that could look like … more on that coming soon.

(…I still took a hacksaw to that last paragraph, too.)
