
The Intelligence Curse, Revisited

In late 2024, two essays landed in my reading queue within a week of each other. Luke Drago's "The Intelligence Curse" compared AGI to oil — the resource curse rewritten for intelligence. When powerful actors can buy cognition directly, they lose every incentive to invest in people. Rudolf Laine's "Capital, AGI, and Human Ambition" made the complementary argument: whatever wealth distribution exists at the moment AGI arrives gets permanently locked in. Money converts to superhuman capability. The ladder pulls up behind you. Outlier success dies.

Both essays were well-argued, directionally alarming, and published before any real data existed to test them.

Now data exists. Anthropic published its labor market impact analysis on March 5, 2026. There's a radar chart in it that I keep going back to — AI theoretical exposure by occupation, plotted against actual adoption. The blue (exposure) is enormous across knowledge work: computing, finance, management, legal, administration. The red (actual adoption) is a thin sliver inside the blue. Physical labor categories — construction, food service, maintenance, transportation — barely register on either axis.

That gap between blue and red is the whole story right now. Time for a scorecard.


What They Got Right

Entry-level collapse

This is the most validated prediction from both essays, and the numbers are worse than either author projected.

Junior tech hiring fell 67% between 2023 and 2024. US programmer employment dropped 27.5% over the 2023-2025 window. Employment among developers aged 22 to 25 is down 20%. Anthropic's own data shows young workers' job-finding rates down 14% overall.

76% of employers now hire the same number of entry-level workers as the year before, or fewer. CS graduate unemployment hit 6.1%. The bottom rungs of the career ladder aren't creaking — they're missing.

Freelancers as canary

Ramp's spending data is the cleanest signal I've seen. Enterprise spending on freelance platforms went from 0.66% of total spend to 0.14%. In the same period, spending on AI model providers went from 0% to roughly 3%. Direct substitution, quantified in corporate credit card data.

Freelance writing volume dropped 30%. Freelance dev work dropped 21%. Translation is ground zero — some translators reported going from 50-60 hours per week to zero requests. Machine translation post-editing pays one-quarter of traditional rates when work exists at all.

Capital concentration

Laine predicted wealth would concentrate around AI's controllers. He was conservative.

AI absorbed 50% of all global venture capital in 2025. OpenAI ($840B valuation) and Anthropic ($380B) together accounted for 14% of ALL venture capital deployed worldwide. Not 14% of AI VC — 14% of everything. Nvidia hit $4.4T market cap. Big Tech collectively committed $650B in AI capex for 2026.

Drago's "few actors controlling intelligence" framing looked hyperbolic when he wrote it. It looks descriptive now.

The hollowing out

Wall Street firms are planning 200,000 job cuts over the next 3-5 years. Accounting AI adoption jumped from 9% to 41% in a single year. TCS laid off 12,000. India tech salaries fell 40%.

The $80K-salaried white-collar knowledge worker — the person who thought they were safe because their job "requires judgment" — turns out to be the most exposed category. That's exactly what both essays predicted.


What They Got Wrong

No jobs-pocalypse (yet)

Yale Budget Lab found "no significant differences in unemployment rates for AI-exposed occupations through November 2025." Some 55,000-76,000 layoffs in 2025 were directly attributed to AI: 4.5% of total layoffs, rising to 8% in early 2026. Those are real numbers, but they're not civilizational collapse. The disruption is sectoral, gradual, and deeply uneven. Not a cliff.

AI-washing

This is the finding that surprised me most. HBR reported in January 2026 that companies are laying off workers because of AI's "potential, not its performance." 55% of employers who made AI-driven layoffs regret the decision. 95% of AI investments have shown zero ROI.

A lot of the "intelligence curse" is companies performing AI transformation for investors and boards rather than actually transforming anything. The layoffs are real, the displacement is real, but the proximate cause is often narrative, not technology. People are losing jobs to a PowerPoint deck about AI, not to AI.

Governments aren't retreating

Laine's framework predicted states would lose incentive to invest in human capital once intelligence could be purchased directly. So far, the opposite. The No Robot Bosses Act is moving through Congress. California, Colorado, and Illinois have passed AI employment legislation. The UK is debating UBI explicitly tied to AI displacement. The EU AI Act includes employment protections.

Democratic governments are responding with regulation, not retreating into irrelevance. This doesn't mean they'll succeed — but the prediction that states would simply capitulate to AI capital was wrong on the current timeline.

Open source shattered the capital moat

Neither essay anticipated this. DeepSeek V4 trained for $5.6M — roughly one-tenth the cost of Meta's Llama. When that number came out, Nvidia lost $600B in market cap in a single day. Arcee — 30 people — built a 400B parameter model that matched Meta on benchmarks.

Capital concentration is real but the moat is more porous than either Drago or Laine assumed. Intelligence, unlike oil, has a marginal cost that keeps falling. The resource curse analogy breaks down precisely here: oil stays expensive because geology says so. Compute gets cheaper because physics says so. Open source accelerates that.


What Nobody Predicted — The Bifurcation

The original essays framed the future as binary: AI haves vs. have-nots, capital vs. labor, winner-take-all. The reality is messier and, in some ways, more disturbing.

The winners

AI skills now command a 56% salary premium, up from 25% two years ago. AI Engineer as a job title grew 143%. Solo founders account for 36.3% of new ventures — one of them built an $80M exit in six months. Amodei predicted the one-person billion-dollar company would arrive by the end of 2026.

This connects directly to Part 3 — the super individual is real, and the returns to individual capability are compounding.

The losers

"AIRD" — AI Replacement Dysfunction — is now a formally recognized clinical framework. 71% of workers report fearing permanent AI displacement. India's tech sector saw pay drop 40%. Entry-level developers can't get interviews. Translators can't get work.

The centaur window

Amodei has been talking about the "centaur era" — the window where human+AI teams outperform pure AI. I wrote about this in Part 3. The data supports it: teams using AI tools meaningfully outperform both pure-human and pure-AI approaches on complex tasks.

But Amodei also said the centaur phase may be "very brief — perhaps only a few years." If that's right, the on-ramp to the super individual path is narrowing as you read this. The skills premium is real but the window to develop those skills may not be as long as people assume.

The bifurcation table looks like this:

Dimension        | Winners                              | Losers
-----------------|--------------------------------------|--------------------------------------
Salary           | AI skills +56% premium               | India tech pay -40%
Employment       | AI Engineer +143% growth             | Junior dev hiring -67%
Entrepreneurship | Solo founder $80M exit               | Freelance writing -30%
Mental health    | Agency, leverage, new creative modes | AIRD recognized as clinical condition

Same technology, same year, completely opposite experiences depending on which side of the bifurcation you landed on.


The Real Intelligence Curse

Both essays asked: what happens when intelligence becomes a purchasable commodity? The implicit assumption was that the technology itself would be the destructive force. Buy enough intelligence, render humans irrelevant, lock in the gains.

Twelve months of data suggest something different. The real intelligence curse — the one doing damage right now — is the narrative of AI making humans irrelevant, running ahead of the technology itself.

Companies are laying off workers based on AI's potential, not its performance, and 55% regret it. Young people are abandoning CS degrees at the exact moment when AI engineering skills command the highest premium in the market. AIRD is a clinical framework for anxiety about displacement that, for most workers, hasn't happened yet — Anthropic's own data shows 30% of workers have literally zero AI exposure.

97% of tasks are theoretically AI-feasible. 30% of workers have had no contact with AI tools. The gap between those two numbers is where the psychological damage concentrates. People who've never used AI are the most afraid of it, because the story — "AI will take your job" — is more vivid and available than the reality, which is uneven, slow, and deeply sector-dependent.

The intelligence curse in 2026 is more psychological than economic. That doesn't make it less real. People making career decisions based on fear of AI displacement — avoiding technical fields, accepting worse terms, not starting businesses — are experiencing real economic consequences. The narrative reshapes the economy whether the technology justifies it or not.


Scorecard

Drago and Laine asked the right questions. Their directional predictions about entry-level collapse, capital concentration, and freelancer displacement were correct — in some cases more correct than they probably expected. Their predictions about speed (too fast), totality (too complete), government response (too passive), and capital moats (too impregnable) were wrong.

The biggest miss was structural. Both essays modeled the future as a single trajectory — either AI creates a permanent overclass or it doesn't. The actual outcome is a bifurcation. The same technology that eliminated 67% of junior tech jobs also created a 56% salary premium for AI skills. The same market that concentrated 50% of VC into AI also let a 30-person team match Meta's models at 1/10 the cost. The same wave that's putting translators out of work is enabling solo founders to build $80M companies.

The centaur window — where human+AI teams are the optimal unit — is open. Amodei thinks it might close within a few years. The original essays didn't discuss this window at all, which may turn out to be their most consequential omission.

I'd recommend reading both originals. Not because they got everything right, but because the questions they raised are the ones that matter. In Part 1 of this series I argued benchmarks don't tell you much. In Part 2 I showed the compute subsidy is absurd. In Part 3 I made the case for the super individual. In Part 4 I documented what hype looks like from the inside.

This post is the view from twelve months of data. The curve is bending in multiple directions simultaneously, and which direction it bends for you depends on which side of the bifurcation you're standing on. That's not a comfortable conclusion, but it's what the numbers say.


© Xingfan Xia 2024 - 2026 · CC BY-NC 4.0