You're Worrying About the Wrong AI Timeline

Two pieces of research landed on my desk the same week.

Bojie Li's analysis calculates that global compute currently supports 6.8 million "digital workers" — AI agents that work 40-hour weeks like humans. His projection: 14 billion by 2030, 720 billion by 2035, at a cost dropping from $2,950/month to $4/month. Every human commanding a team of 9 digital assistants.

Anthropic's labor market study says: "limited evidence that AI has affected employment to date." Computer programmers — the most exposed occupation at 75% task coverage — show no systematic unemployment increase. The aggregate signal is "indistinguishable from zero."

One report paints a world where digital workers outnumber humans 72 to 1. The other says the revolution hasn't even started yet.

Both are right. And the gap between them is the most important thing to understand about AI's impact on work.


Amara's Law

Roy Amara, a Stanford researcher, observed a pattern that keeps repeating: we overestimate the effect of a technology in the short run and underestimate it in the long run.

ATMs were supposed to kill bank tellers. Instead, per-branch teller count dropped from 20 to 13, but banks opened 43% more branches. Total teller employment rose. The job transformed from cash handler to relationship manager.

Spreadsheets were supposed to kill accountants. VisiCalc compressed financial modeling from 20 hours to 15 minutes. Instead of eliminating accountants, spreadsheets created 600,000 more accounting jobs — because suddenly everyone needed financial analysis, not just the Fortune 500.

The internet was supposed to destroy retail. The dot-com crash in 2000 seemed to prove the skeptics right. Then e-commerce quietly grew for two decades and now accounts for 22% of all global retail — a transformation that vastly exceeded even the most breathless 1999 predictions.

Every single time: short-term fear → crash → long-term transformation that dwarfs the original hype.

AI is following the same script. And we're in the "short-term fear" chapter right now.


The Short-Term Panic Is Overblown

Let me be specific about why.

The adoption gap is enormous. Deloitte's 2026 State of AI report found that 78% of companies have experimented with AI, but only 27% have enterprise-wide deployment. Only 21% have redesigned workflows around AI. Only 2% have agentic AI at full scale. The technology exists in demos and pilot programs, not in production replacing workers at scale.

Costs are still prohibitive. Li's own analysis shows the current end-user cost of a digital worker is ~$2,950/month. That's competitive with a junior employee in some markets, but not when you factor in that AI agents still need human oversight, can't handle GUI-heavy workflows efficiently, and produce code with 1.75x more logic errors than humans. The ROI doesn't pencil out for most use cases today.

Computer-use tasks — the bulk of office work — are at human parity, not above it. Claude Opus 4.6 scores 72.7% on computer use benchmarks. Humans score 72.4%. Roughly the same — except agents take 1.4-2.7x more steps and get slower as tasks get longer. For anything requiring a GUI (which is most enterprise software), digital workers are not yet faster than humans.

The Anthropic data confirms it. No systematic unemployment increase for AI-exposed workers. The unemployment rate for the most-exposed quartile sits at ~3%, same trajectory as unexposed workers. Anthropic's framework can detect a 1 percentage point shift — the signal simply isn't there yet.

If you're a knowledge worker losing sleep over AI taking your job this year, the data says you can exhale. Not because the threat isn't real — but because the timeline is wrong.


The Long-Term Impact Is Wildly Underestimated

Now the other side.

The cost curve is relentless. AI inference costs have fallen 280x from 2022 to 2025, with acceleration after January 2024. Li projects digital worker costs dropping from $2,950 to $72/month by 2030. Even if his timeline is optimistic by 2-3 years, the direction is unambiguous. When a digital worker costs less than a Netflix subscription, the economics of knowledge work change permanently.
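As a back-of-envelope sanity check on that curve: taking the article's own endpoints ($2,950/month today, $72/month by 2030) and assuming a roughly four-year window, the implied annual decline works out to about 2.5x per year. The per-year rate here is my own derived figure, not one Li states.

```python
# Back-of-envelope check on the projected cost curve.
# Endpoints are from the article; the 4-year window and the
# derived per-year rate are my own assumptions, not Li's.
start, end, years = 2950, 72, 4          # $/month, roughly 2026 -> 2030

# Constant-rate decline: start / rate**years == end
annual_decline = (start / end) ** (1 / years)
print(round(annual_decline, 1))          # ~2.5x cheaper every year
```

A steady ~2.5x/year decline is steeper than Moore's Law ever was, which is why even a 2-3 year slip in the timeline doesn't change the conclusion.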

The young worker canary. Anthropic found "suggestive evidence" of slowed hiring for workers aged 22-25 in AI-exposed occupations. External data is more alarming: the Dallas Fed reports a 3+ percentage point decline in job-finding rates for young workers in high-AI occupations since late 2023. SignalFire VC data shows a 50% decline in new role starts for recent graduates at major tech firms. Junior developer job postings have fallen 67%.

This is the leading indicator. Companies aren't firing senior engineers — they're not hiring junior ones. The entry-level pipeline is quietly drying up while everyone debates whether AI "really" replaces jobs.

The BLS is already pricing it in. Bureau of Labor Statistics 2024-2034 projections explicitly incorporate AI impacts. For every 10 percentage point increase in AI coverage, the projected employment growth drops by 0.6 percentage points. Admin, sales, paralegal, translator, and graphic designer roles are all projected to decline.

The "Great Inversion" is structural. Li describes a three-stage shift:

  • 2026: Human decides → human executes → AI assists
  • 2030: Human decides → AI executes all digital work → human does physical work
  • 2035: AI decides and executes all digital work → AI hires humans for physical tasks

This sounds extreme until you look at where we already are. 36.3% of new startups are solo-founded — doubled from 2017. Individual developers are shipping products that 10-person teams built two years ago. The structural shift from teams to individuals isn't a prediction; it's already happening.


The Swiss Watch Argument (And Why It's Incomplete)

Li makes an elegant analogy: when electronic watches surpassed mechanical ones in precision and cost, Patek Philippe didn't die — "completed by human craftsmen" became the value proposition itself. He argues that when AI handles all information work, human presence becomes the premium good. Therapy, education, art, craftsmanship — demand for these won't shrink, it'll explode.

I mostly agree with this. But it's incomplete.

The transition path matters. Not everyone who loses a programming job becomes an artisan therapist. Skill conversion takes time, money, and institutional support that doesn't exist yet. The watchmaking analogy works for the steady state — but the decade between here and there is where the pain concentrates.

More importantly, the Swiss watch economy employs ~60,000 people in Switzerland. The global quartz watch industry employs millions. When a profession premiumizes, the number of jobs in the premium tier is a fraction of the original. Most former accountants didn't become "artisan accountants" — they became data analysts, a different job entirely.

The real lesson from history isn't "your job gets fancy." It's "entirely new jobs appear that nobody predicted." Nobody in 1990 would have guessed "social media manager" or "UX researcher" or "DevOps engineer." The new jobs AI creates will be equally unrecognizable from today's vantage point.

But there's one category I'm confident about: the person who orchestrates AI to build things.


Become the Orchestrator

Sam Altman has a betting pool with tech CEO friends for the first one-person billion-dollar company. Dario Amodei warns the "centaur phase" of human-AI collaboration "could be brief." Jensen Huang says "everyone should be able to make a great living" because AI democratizes capability.

I think they're all right, and the practical implication is clear: the highest-leverage position in the next decade is the person who can decompose problems and direct AI agent teams to solve them.

This isn't speculative. I've been living it.

PanPanMao — 10 apps in 29 days, solo with Claude Code. Mio — from empty repo to voice AI companion in 4 days, then a complete product pivot, one person. I wrote about this in Part 3: The Super Individual — the thesis that systematic thinking and orchestration ability are the meta-skills of this era. Everything I've seen since has reinforced it.

The data backs it up. Solo founders now represent 36.3% of new startups (Carta, H1 2025), up from 17% in 2017. 52.3% of successful startup exits were achieved by solo founders. Pieter Levels runs a $3M/year portfolio of products with zero employees and 90%+ margins.

The Gartner numbers tell the supply side: 1,445% surge in multi-agent system inquiries from Q1 2024 to Q2 2025. 40% of enterprise applications will feature task-specific AI agents by 2026. The infrastructure for agent orchestration is being built right now — MCP, A2A, OpenAI Agents SDK — and the people who learn to use it first will have a massive head start.


Practical Guide: From Worker to OPC

If you're convinced — or even curious — here's what I've learned about operating as a One Person Company with agent teams.

1. Learn to decompose, not to prompt.

Prompt engineering is a transitional skill. Models get better at understanding ambiguous instructions every quarter. What doesn't get automated is the ability to look at a messy, ill-defined problem and break it into clear subproblems with defined success criteria. This is architecture. This is what you do before you touch any AI tool.

I keep a running "what does done look like" doc before every build. Not glamorous. But every time I skip it, I waste a day going in circles.
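To make the "what does done look like" habit concrete, here is a minimal sketch of what that doc amounts to in structured form. Everything here (the `Subproblem` class, the `done_when` field, the example tasks) is my own illustration, not a tool or format from the article.

```python
from dataclasses import dataclass, field

# Illustrative only: one way to force "what does done look like"
# into writing before touching any AI tool. All names are invented.
@dataclass
class Subproblem:
    goal: str
    done_when: list[str] = field(default_factory=list)  # testable criteria

    def is_well_defined(self) -> bool:
        # A subproblem with no success criteria isn't ready to delegate.
        return bool(self.done_when)

spec = [
    Subproblem("Draft landing page copy",
               done_when=["headline under 10 words", "one clear CTA"]),
    Subproblem("Figure out pricing"),  # vague: no criteria yet
]
print([s.is_well_defined() for s in spec])  # [True, False]
```

The point isn't the code; it's that a subproblem you can't attach `done_when` criteria to is a subproblem you haven't finished decomposing.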

2. Think in parallel, not sequential.

The biggest unlock in agent orchestration is parallelism. When I research a topic, I don't ask one agent to do everything — I spin up 5-10 specialized agents simultaneously. One validates data claims, one cross-references labor market studies, one investigates historical precedents. This article was researched by three parallel agents in under 10 minutes. A human research team would take days.

The skill is knowing how to partition work so agents can run independently without blocking each other. This is distributed systems thinking applied to knowledge work.
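The fan-out pattern above can be sketched in a few lines of asyncio. This is a hedged illustration, not the author's actual setup: `run_agent` is a stand-in for a real agent/LLM call, and the roles are the three examples from the paragraph above.

```python
import asyncio

# Hypothetical stand-in for a real agent call (e.g. an LLM API request);
# each "agent" just returns a labeled result after simulated work.
async def run_agent(role: str, task: str) -> dict:
    await asyncio.sleep(0.01)  # placeholder for a real API round-trip
    return {"role": role, "task": task, "status": "done"}

async def research(topic: str) -> list[dict]:
    # Partition the work into independent subtasks, then fan out.
    # Because the subtasks share no state, they run concurrently,
    # and gather() returns results in the order they were submitted.
    roles = {
        "fact-checker": f"validate data claims about {topic}",
        "labor-analyst": f"cross-reference labor studies on {topic}",
        "historian": f"find historical precedents for {topic}",
    }
    return await asyncio.gather(
        *(run_agent(role, task) for role, task in roles.items())
    )

results = asyncio.run(research("AI and employment"))
print([r["role"] for r in results])
```

The design choice that matters is in `research()`: the partitioning happens before any agent runs, so no agent blocks on another. That's the distributed-systems instinct applied to knowledge work.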

3. Ship ugly, kill fast.

AI makes the cost of trying near zero. Build three versions, throw away two. Mio v1 was rough and built on wrong assumptions — but without shipping it, I'd never have watched real users interact with it, and without that I'd never have found the v2 pivot. The first version is a research instrument, not a product.

When something isn't working, move on within days. The temptation to polish a losing hand is the biggest time sink.

4. Taste is the last moat.

When everyone has the same AI tools, the differentiator is judgment. Which problem to solve. Which of five generated options is actually good. Whether the UX "feels right." Whether the copy hits the right tone. AI generates infinite options — you curate. That curation skill comes from building things and watching them break. There's no shortcut.

5. Distribution > building.

This is the counter-argument that solo builders need to hear: "In 2026, building is easy and getting noticed is the real challenge." AI equalizes the building side. What it doesn't equalize is trust, relationships, audience, and brand. The OPC that ships a great product but can't get distribution loses to the mediocre product with a built-in audience every time. Build your distribution channel — newsletter, social, community — before you need it.

6. Guard your mental health.

72% of entrepreneurs face mental health struggles. Solo founders carry the entire cognitive and emotional load without partners to share it. This is real. Build a support system — peers, mentors, communities. The isolation loop (withdraw → feel worse → withdraw more) is the silent killer of solo ventures.


The Timeline That Matters

Here's my read on the actual timeline, triangulating Li's analysis, Anthropic's data, and historical precedent:

2026-2027 (Now): The fear is louder than the reality. AI adoption is early. Costs are high. Most enterprise workflows are still human-driven. The pain is concentrated in entry-level hiring, not mass layoffs. This is the window to retool. If you're in a knowledge work role, this is your ATM-to-relationship-banker moment. Learn to orchestrate AI now, while your job still exists and gives you the domain knowledge to orchestrate well.

2028-2030: The inflection. Model capability clearly surpasses human level on most knowledge tasks. Costs drop 10-40x through distillation and competition. Digital workers go from "luxury tool" to "public utility." The companies that didn't adapt start losing to the ones that did. The job market feels it for real — not just at the entry level.

2031-2035: The new normal. One person + 100 agents = a company. Monthly cost per digital worker: falling from ~$27 toward $4. Entirely new job categories emerge that we can't predict today. The "premium human" economy takes shape — therapy, education, craft, physical presence. The people who retooled in 2026-2027 are running the show. The ones who didn't are in painful transition.

The window is open. It won't stay open forever.


Part 1 showed that benchmarks don't matter — systems do. Part 2 showed that AI compute is being given away at absurd discounts. Part 3 made the case for the super individual. Part 4 dissected why the same tech creates mass hysteria in one country and a shrug in another. Part 5 reexamined the intelligence curse.

This one is about time horizons. The short-term threat is overblown. The long-term transformation is bigger than anyone's pricing in. And the right response isn't panic or denial — it's positioning.

Amara was right. The question is which side of his law you end up on.


This post is also available in Chinese (中文版).


© Xingfan Xia 2024 - 2026 · CC BY-NC 4.0