The Super Individual: Why AI's Future Belongs to Architects

Block just cut 46% of its workforce. Jack Dorsey told the remaining employees directly: the savings are going into AI. The headcount isn't coming back.

These aren't the 2022-2023 "macro conditions" layoffs. Companies are now openly saying what everyone suspected: AI replaces headcount. Not hypothetically. Now.

The interesting question isn't "will AI take my job?" — it's what kind of person thrives in this new world.

A recent episode of the Chinese tech podcast "科技早知道" (S10E01) crystallized some of this for me. The guest was David Shell — ex-LinkedIn, now CEO of Walnut AI — sharing observations from the Silicon Valley frontlines. A lot of what he said mapped onto what I've been experiencing as a solo builder. This isn't a summary of that conversation — it's my own take, using his observations as a springboard. Worth a listen if you understand Chinese.


The Rise of the Super Individual

Over 30% of startup founders are now solo founders.

Building a company used to require assembling a team of specialists. You needed a backend engineer, a frontend engineer, a designer, a product manager, a data analyst. Even a simple MVP demanded 3-5 people coordinating across disciplines. The coordination cost alone — meetings, alignment, communication overhead — consumed a huge chunk of the calendar.

That constraint is evaporating. One person with systematic thinking and AI orchestration ability can now build a real, shipped, revenue-generating product.

David called it plainly: "One-person companies will proliferate massively." I'd go further — they'll compete directly with teams of 10-20 and often win on speed.

My own experience bears this out. PanPanMao — 10 shipped apps in 29 days, solo with Claude Code. Mio v1 — empty repo to working voice companion in 4 days. Mio v2 — complete product pivot, one person. Two years ago any of these would've needed a full team.


AI Doesn't Lack Execution — It Lacks Architects

When David described his hiring framework at Walnut AI, three things stood out: AI symbiosis, systematic thinking, and collaboration. Notice what's missing from that list. He didn't say "prompt engineering." He didn't say "knows all the latest AI tools." He said systematic thinking.

The AI can already write code, generate designs, analyze data, draft strategies. Execution power is abundant and getting cheaper — as I showed in Part 2, compute is practically being given away right now.

The bottleneck is knowing what to execute.

Naval Ravikant's point is worth repeating: as AI improves, it adapts to you. The models get better at understanding ambiguous instructions, inferring context, working with messy inputs. The skill of carefully crafting the perfect prompt becomes less important over time, not more. What becomes the entire game is your ability to clearly define the problem.

First principles thinking beats prompt engineering. Start from the fundamentals: what specific problem are you solving? Can you decompose it into subproblems? Can you identify which subproblems are critical path and which are noise? Can you define what success looks like before you start building?

If you can do that, the AI will figure out the rest.

Here's a concrete example. Testing used to be a discipline unto itself. Pre-AI teams wrote unit tests for every function, debated coverage thresholds, built elaborate test harnesses. The AI-native approach is different: define the system-level behavior — inputs, expected outputs, edge cases, invariants — and let AI generate the tests. What changed is the execution layer. What didn't change is the systematic thinking required to define what needs to be verified. You still need to reason about failure modes, boundary conditions, and integration points. The AI just writes the test code faster.
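The shape of that spec-first approach can be sketched in a few lines. Everything here is hypothetical — a toy `apply_discount` function stands in for the real system — but it shows the division of labor: the human writes down the concrete input/output pairs and the invariants; the AI's job is to expand that spec into a full test suite.

```python
def apply_discount(price: float, percent: float) -> float:
    """Toy stand-in for the system under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# Human-defined spec, part 1: concrete input/output pairs and edge cases.
CASES = [
    (100.0, 0, 100.0),   # no discount leaves the price unchanged
    (100.0, 100, 0.0),   # full discount zeroes the price
    (200.0, 25, 150.0),  # a typical mid-range discount
]

# Human-defined spec, part 2: invariants that must hold for ANY valid input.
def check_invariants(price: float, percent: float) -> None:
    result = apply_discount(price, percent)
    assert 0 <= result <= price, "a discount must never raise the price"

if __name__ == "__main__":
    for price, pct, expected in CASES:
        assert apply_discount(price, pct) == expected
    for price in (0.0, 1.0, 999.99):
        for pct in (0, 50, 100):
            check_invariants(price, pct)
    print("spec holds")
```

The human reasoning lives entirely in `CASES` and `check_invariants`; generating the hundreds of parametrized test functions, fixtures, and harness code around them is the part that's now cheap.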

David put it simply: "AI's capability is already very strong. What's being tested now is your ability to orchestrate it."

That's the key skill. Not using AI. Orchestrating it.


The 99/1 Paradox

There's a pattern I keep encountering that David articulated perfectly: "AI completes 99% of the work in 1% of the time, but you then spend 99% of your time solving that last 1% of detail."

It doesn't sound paradoxical once you've lived it.

AI gets you to 98-99% astonishingly fast. The first draft is done in minutes. The architecture is roughed out. The code compiles. The feature works in the happy path. You look at the clock and think: this would have taken a week, and it took an hour.

Then you spend the next three days on the last 1-2%. The edge case that crashes on specific input. The UX flow that technically works but feels wrong. The performance issue that only shows up at scale. The accessibility gap. The copy that's almost right but subtly off-tone. The integration that works in dev but fails in production because of an environment variable.

This is where craftsmanship lives. AI can generate infinite code, but it can't tell you which edge case will lose a customer. It can produce five different onboarding flows, but it can't tell you which one will make a new user feel welcomed versus overwhelmed. It can refactor your entire codebase, but it can't tell you whether the refactor serves the product direction or just satisfies an engineering aesthetic.

Every product I've shipped required human judgment calls no AI could make. With PanPanMao, it was deciding which of the 10 apps to prioritize, how they should interconnect, what the brand identity should feel like. With Mio, it was the fundamental product direction — the pivot from persona roleplay to AI-native companion wasn't something an AI suggested. It came from watching real users interact with the product and recognizing a pattern the AI couldn't see.

David's point: "AI sometimes doesn't know what problem to solve. You must tell it." The architect's role is fundamentally about direction.


The Centaur Era

Dario Amodei, Anthropic's CEO, has been talking about the "centaur era" — the period where the most effective unit isn't a human or an AI, but a human fused with AI capabilities. Half human judgment, half machine execution. The centaur.

I think this framing is exactly right, and it maps onto what the super individual actually is. You're not replacing yourself with AI. You're augmenting yourself. Your agent army extends your cognitive reach into domains you couldn't cover alone.

David gave a vivid example from his own practice. He built a system that analyzes 1,000 conference attendee profiles in 1-2 minutes — identifying who to meet, what to discuss, what the mutual value exchange might be. The manual version of this takes 10+ hours and you still miss people. His system doesn't replace his networking judgment. It extends it across a dataset he could never process manually.

That's orchestration in practice. Not "AI, do my networking for me." Instead: "I know what makes a valuable connection. AI, apply that judgment at a scale I physically can't."

Another example: an 18-year-old built a cardiac arrhythmia detection app using AI tools. Not a CS student. Not a medical device engineer. Someone with domain curiosity, access to AI tools, and — critically — the systematic thinking to orchestrate those tools into a coherent product. The catch: they didn't just ask ChatGPT to "make a heart app." They decomposed it — what data do I need, how do I process it, what constitutes a detection, how do I validate. That decomposition is the human part. The AI handled the subproblems. The human defined what the subproblems were.


Aesthetic Judgment Is the Last Moat

David made a claim I keep coming back to: "Human aesthetic value will never be replaced by AI."

When AI can generate unlimited everything, the scarce resource shifts from production to curation. Taste. Knowing which of the options is actually good. The architect matters more in an AI-saturated world, not less, because someone has to choose — and choosing well requires judgment that isn't reducible to a training objective.

I see companies measuring developer productivity by token consumption — how many AI-generated lines of code did you produce today? This completely misses the point. It's not how much AI you use. It's the quality of what you direct it to produce. A developer who writes 50 lines of carefully architected code that solves the right problem is worth more than one who generates 5,000 lines of AI slop that solves the wrong problem quickly.

When everyone has the same AI tools, the differentiator is the person. Their ability to look at AI output and say "this is close but the emphasis is off" or "this technically works but it's not what we need." That judgment comes from making things and watching them break. AI speeds up the cycle but doesn't shortcut the taste-building.


Advice for Builders

In Part 1, I argued that benchmarks don't matter — what matters is the system, not the model. In Part 2, I showed that AI compute is being massively subsidized right now — the value you get far exceeds what you pay. This post is about the human element: the tools and compute are ready, but the differentiator is you.

A few things I've learned the hard way:

Ship ugly. Mio v1 was rough. Mio v2 threw out almost everything from v1. That's fine — the point is that v1 existed at all, because without shipping it I would never have watched real users interact with the product, and without that I would never have recognized the pivot that v2 is built on. The first version is a research instrument, not a product.

Kill your darlings fast. When I was building PanPanMao, some of the 10 apps got traction and some didn't. The ones that didn't, I moved on from within days. AI makes the cost of trying so low that you should be building three versions and throwing away two. I'm still not great at this — there's always the temptation to polish something that isn't working instead of starting over.

Learn to decompose, not to prompt. The meta-skill that compounds is breaking problems into subproblems. I keep a running doc of "what does done look like" before I start any build. It's not glamorous. But every time I skip it, I waste a day going in circles with the AI.

Things I'd avoid:

Benchmark shopping. I've watched people spend weeks evaluating whether GPT-5 is 3% better than Claude on some leaderboard while shipping nothing. The conditions right now — subsidized compute, rapidly improving tools — won't last. As I covered in Part 2, the math is temporarily absurd.

Being an AI commentator instead of an AI user. I know people who can give you a 45-minute talk on the AI revolution and have never built anything with it. The gap between understanding AI and using AI to build things is enormous. I only started to actually understand agents after I had Claude Code orchestrate parallel tasks for PanPanMao — reading about it is not the same thing.

The systematic thinking part — the orchestration, the problem decomposition, the taste — none of that is downloadable. It comes from building things and watching them break. I'm not sure there's a shortcut.


© Xingfan Xia 2024 - 2026 · CC BY-NC 4.0