
Doom Is Cheap. Mastery Is Not.

Why doom scenarios are easy to imagine and flourishing scenarios are hard, and what the May 2026 tech layoffs show about the work that actually matters.

Doom is cheap. It is easier to imagine things going wrong than to imagine things going right. Pessimism is a default mental state. Optimism is a creative act. The two are not symmetric, and we keep treating them like they are.

“It’s just so much easier to imagine doom scenarios than it is to imagine positive scenarios because optimism requires creativity.”

The half you can picture

When someone imagines a future where a new technology wipes out a category of work, the picture comes together quickly. The radiologist, the paralegal, the customer support agent, the junior developer. You can already see the shape of the loss because the jobs exist now. You are just deleting them.

When someone imagines a future where the same technology creates new categories of work, the picture won’t come at all. There are no nouns yet. You’d have to invent the job titles, the workflows, the skills, the markets. Most people can’t do that on demand. So they don’t.

The result is a permanent imbalance in how we think about the future. Subtraction is concrete; addition is abstract. Both are equally real, but only one is easy to picture, so only one feels true.

An 1826 farmer

Two hundred years ago, almost everyone you knew worked on a farm. If you had told a farmer in 1826 that within a few generations only two percent of the workforce would be doing food production, they would have predicted mass unemployment, widespread starvation, and the collapse of society.

What they could not have predicted is everything else. Software engineer. Brand manager. Industrial designer. Yoga instructor. Machine learning researcher. The vocabulary to describe most modern jobs did not exist in 1826, and the people whose lives they would have improved hadn’t been born yet.

This is the part that is hard to hold in your head. Not because the logic is complicated, but because the content of the logic isn’t available to you. You can know that new jobs will arrive without being able to picture a single one of them.

Two hundred years later

You don’t have to go back to 1826 to see the asymmetry. It happened this week.

On May 5, 2026, Coinbase announced it was cutting about 700 people, roughly 14 percent of its workforce. CEO Brian Armstrong said the company was reorganizing around AI: replacing pure managers with what he called “player-coaches,” and building “AI-native pods,” sometimes one-person teams directing a stack of agents that used to be the work of entire engineering groups.

The same day, Freshworks announced it was cutting around 500 people, about 11 percent of its workforce. The company posted a Q1 earnings beat in the same release: revenue up 16 percent, two of the largest deals in its history. CEO Dennis Woodside said over half the company’s code was now being written by AI tools. Cloudflare cut about 1,100. The 2026 running total across tech crossed 92,000 in the first week of May.

Behind those numbers are people. Mortgages, visas, kids in school, the specific dread of opening a calendar invite titled “quick chat.” Anyone who treats this as abstract is not paying attention. The pain is real, and it is not evenly distributed.

Now sit with the asymmetry. The names of the people who lost their jobs this week are knowable. The names of the jobs that will come from a world where one engineer directs a fleet of agents are not. Brian Armstrong has a phrase for them: AI-native pods. Three years from now, that phrase will either be everywhere, or it will sound as dated as “webmaster.” We cannot tell yet.

There is a counterpoint worth holding. AI is only the fifth most cited reason for layoffs in 2026. The bigger drivers are structural. The four largest tech companies are spending $725 billion on AI infrastructure this year, four to five times their entire payroll line. The trade is sharp: operating expenses become capital expenses, payroll dollars become chip dollars. As Axios put it, the layoffs are partly the financing for that bet. Meanwhile, three out of four executives admit their company's AI strategy is "more for show" than genuine internal guidance. When layoffs come, companies cite AI because investors prefer the automation story to "the business is struggling."

So the doom story, “AI took my job,” is not always literally true. Sometimes the job was cut to fund a bet on what AI might do later. Sometimes it was cut to look modern. The cutting is being done by people, in conference rooms.

None of this softens the loss. But it sharpens what we are looking at. Beyond the financing and the performance, a smaller group of people really are placing bets on what AI can do. The optimist's claim is narrower than people think. It is that the bets are worth making at all, and that the people sketching "AI-native pods" for real, not just for the press release, are doing the creative work this post is about. They could be wrong. They are also the only ones doing it.

Every decade has had one

Every decade has had at least one end-of-the-world story. The environment was going to collapse. A war was going to end civilization. A virus was going to rewrite society.

Some of those came uncomfortably close. COVID was real. The Cuban Missile Crisis was real. We did not dodge them through luck alone, and the people who worried loudly about them often did the work that fixed them. Pessimism is not useless.

But none of these scenarios played out the way the doom narratives predicted. The world that emerged after each was different, not destroyed, and almost always wealthier and healthier than the one before it. The track record of imagined catastrophes is poor. The track record of imagined flourishing, where it existed at all, was generally too conservative.

Pessimism wins the room

Two things keep pessimism winning despite the track record.

The first is cognitive. Imagining loss is cheap; imagining gain is creative work. Most people, most of the time, won’t do creative work for its own sake.

The second is social. Pessimism reads as intelligence. The person warning that the new technology will be terrible sounds careful, sophisticated, well-informed. The person predicting that things will be fine sounds naive, even when they are right more often. There is a status premium on doom that does not exist for optimism.

Call this crabs in a bucket. The pessimists pull the optimists back down whenever they try to climb out. They might be right about any individual prediction, but the cumulative effect is that nothing gets built and nobody tries.

The flute vendor

There is a debate about AI that keeps going in circles. On one side, the people building it. Anthropic’s Dario Amodei tells the press AI will cause “unusually painful” disruption to jobs, that 50 percent of entry-level white-collar work could be displaced, that software engineers may be replaced in months. OpenAI’s Sam Altman said AI agents would “join the workforce” in 2025.

On the other side, the receipts so far. A widely circulated MIT study from late 2025 found that 95 percent of enterprise AI pilots failed to deliver measurable ROI. Eighty-nine percent of managers reported no change in productivity over three years of AI adoption. The Yale Budget Lab tracked occupations exposed to AI from the release of ChatGPT through March 2026 and found no significant differences in their rate of change versus the rest of the economy. The AI agents Altman promised did not arrive on schedule. The ones that shipped did not live up to the pitch.

Both sides are arguing about whether AI works. Both sides are treating AI as a force that does something to us.

The more useful question is who is using it, and how well.

Anyone who has walked through an Indian market has met the flute vendor. He plays one and it sounds like a breeze moving through reeds, the kind of melody that stops you walking. You buy one. At home, you blow into it and what comes out is sound, not music.

The flute didn’t change. The skill did.

This is the part both camps miss. AI is a tool. The MIT study's own conclusion, under the headline, was that the failures were not failures of the model. They were failures of what the researchers called a "learning gap": organizations buying tools they had not learned to use, applying them to workflows they had not redesigned, and expecting outcomes the tool could not deliver on its own. The 5 percent that worked were the ones that had paired the tool with the skill.

When someone tells you AI will replace half of white-collar work, they are imagining the flute playing itself. When someone tells you AI is useless and the pilots all fail, they are imagining the flute that nobody learned to play. Both miss the same thing. The work is in the playing.

This is what optimism actually requires. The slow work of learning to play. The creative act is figuring out which problem the tool fits, what structure to put around it, what your own role becomes once it is in your hands. None of that is cheap, and none of it is automatic. Most of it has not been done yet. The people doing it are not loud. They are figuring it out one workflow at a time.

The predictable phase

We are in the most predictable phase of an AI cycle. The doom stories write themselves. Thirty percent of jobs gone in five years. A creative class displaced. An economy nobody understands.

Some of that might happen. The harder question is what comes next, and we can’t answer it yet because the answer requires creative work that has not been done.

What we can do is notice when our brains are reaching for the easy story. The job that disappears is legible. The job that replaces it is not yet. Treating those two as comparable evidence is a mistake.

“We have to nurture optimism. We have to reward optimism. We have to be irrationally optimistic because that’s the only way out of this anyway.”

That word irrationally is the part most people miss. The rational evidence for optimism is structurally unavailable, because the future has not yet been imagined. Optimism is the position you have to take when the evidence cannot reach you. Someone has to do the imagining. The pessimists will not.

Who you want in the foxhole

The pessimist is not the person you want to be in a foxhole with. Even when they read the danger correctly, they cannot help you find the way out.

If you are building anything right now, look at the people around you. The ones cataloguing what will go wrong, in detail, with confidence. And the ones squinting at the same mess and asking what could be built on top of it. Both might be smart. Only one of them is doing the harder work.

Pick up the flute

The despair is not abstract. Forty percent of US workers say they fear AI will make their job obsolete, up from 28 percent two years ago. Sixty percent expect AI to eliminate more jobs in 2026 than it creates. Therapists report a new category of session: the patient who came in to talk about "the fear of becoming obsolete." Across the online communities where developers and knowledge workers gather, the same sentence keeps appearing: "I have no idea where the programmers of 2060 are going to come from."

The despair is real. It is also, in places, ahead of the data.

A Harvard study from earlier this year found that when companies adopt generative AI, junior developer employment drops about 9 to 10 percent within six quarters. That number is real, but it is the learning gap arriving early, not the start of extinction. Companies have not yet figured out how to pair AI with junior developers in a way that makes both more productive, and the cheapest temporary response is to cut the role. Most of the more dramatic numbers you read beyond that are traditional cost-cutting dressed up in AI rhetoric. The doom story is being told louder than the data warrants, by people who are afraid and by companies who find the story convenient.

This is where the flute matters again.

You cannot opt out of AI. You can refuse to learn it, but the vendor is still in the market, and people are still buying flutes. The world does not wait for you to pick one up.

None of this is being made easier by the companies. Most organizations are not providing the sheet music: the redesigned workflows, the protected practice time, the patient feedback loops that would let a learner make music sooner. That is not your fault. But the playing has to start somewhere.

What you can do is decide to be the vendor who plays. AI is a multipurpose multiplier. It will be useless in the hands of someone who treats it as a magic spell, and powerful in the hands of someone who treats it as a tool. Both of those people exist already. You get to choose which one you become.

This is what the optimism asks of you, concretely. Pick up the tool, sit with it, learn its real strengths and weaknesses, and figure out the small set of problems where it actually changes what you can build. That is the creative work this post has been about. The work is slow, hard, and not cheap. It is also available to anyone willing to pick it up. Trust the rest. It always gets figured out, by people who picked up tools and learned them.

If you are reading this and you are scared, that is fair. The vendor is still playing. The world is going to keep moving. Doom is cheap. The playing is not. Walk over, buy a flute, and start practicing.


Co-written with AI. Credit the prose, blame the opinions.