What I Learned Shipping a Real Product as a Solo AI-Augmented Developer
In Part 1, I explained the why. In Part 2, the how. Now, the honest lessons. Not the "everything went great" version. The real one.
On AI Bridging Domain Knowledge
The core hypothesis was a question: can AI bridge a domain knowledge gap? The answer is yes, with significant caveats.
What worked: 20,000 lines of BaZi calculation logic. 1,000+ dream symbols. 78 tarot cards with detailed frameworks. All researched and implemented through AI. Users who know BaZi say the calculations are correct. The domain knowledge is credible.
The caveat: you need to know HOW to ask the right questions. I spent the first few days asking Claude to teach me the domain -- not to write code, but to explain concepts. Once I had a mental model, I could ask implementation-level questions that produced usable output.
Zero knowledge is possible. Zero curiosity is not.
On Product vs. Engineering
Severely underestimated:
- Landing page iteration (3 full redesigns -- positioning problems, not code problems)
- Pricing psychology (more time on credit packages than any API endpoint)
- Conversion funnels (7 contextual upgrade triggers, each tailored to its emotional moment)
- Retention design (Daily Hub exists purely to drive daily active usage)
Overestimated:
- Architectural perfection (shipping beats polish)
- Test coverage (cover critical paths, move on)
Naming credits "dried fish treats" instead of "tokens" was a product decision. People are more willing to spend 5 fish treats than $0.70. The abstraction reduces the psychological pain of spending.
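A minimal sketch of what that abstraction looks like in code. The 5-treats-for-$0.70 ratio comes from the numbers above; everything else here -- the constant names, the feature catalog, the `charge` helper -- is hypothetical illustration, not PanPanMao's actual implementation.

```python
# Hypothetical credit abstraction: features are priced in "fish treats",
# and dollars only appear in internal accounting.
CENTS_PER_TREAT = 14  # 5 treats = $0.70, so 70 / 5 = 14 cents per treat

# Illustrative feature catalog, priced in treats (names are made up).
CATALOG = {
    "tarot_reading": 5,
    "dream_symbol_lookup": 1,
    "bazi_full_chart": 10,
}

def treats_to_dollars(treats: int) -> float:
    """Convert a treat price back to dollars, for accounting only."""
    return treats * CENTS_PER_TREAT / 100

def charge(balance: int, feature: str) -> int:
    """Deduct a feature's treat cost; refuse if the balance can't cover it."""
    cost = CATALOG[feature]
    if balance < cost:
        raise ValueError(f"need {cost} treats, have {balance}")
    return balance - cost

balance = charge(12, "tarot_reading")  # 12 - 5 = 7 treats left
print(balance, treats_to_dollars(CATALOG["tarot_reading"]))  # 7 0.7
```

The point of the design is that the UI only ever shows treat counts; the dollar conversion stays server-side, so spending "5 fish treats" never reads as spending $0.70.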
On the Agentic Coding Workflow
97% AI-generated code does NOT mean 97% less work. It means 97% less typing and roughly 10x more shipping.
My daily work: architecting, reviewing, prompt engineering, course-correcting, testing. The claude/ and codex/ branches in git are where AI agents proposed changes that I reviewed and merged.
AI is better at consistency than humans. Claude applied API endpoint patterns more uniformly than I would have. Humans get bored; AI doesn't.
AI struggled with emotional intelligence. Fortune reading tone -- empathetic but not patronizing, specific but not prescriptive -- required extensive manual prompt iteration.
On Being Solo
No meetings. No standups. No Jira. Just git commit and ship. The freedom was intoxicating.
The downside: no one catches blind spots. No one asks "are you sure users want this?" The discipline of knowing when to stop iterating is harder alone.
Counterbalance: an automated CI bot that filed 32 bug reports over the course of the project. AI-powered QA served as a rudimentary stand-in for "someone else looking at your code."
On Credit Economies
The referral system taught me incentive design in real time. Asymmetric rewards (10/3) made sharing feel extractive; symmetric rewards (5/5) made it feel like giving a gift. Referral volume went up.
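The lesson encodes cleanly. Assuming "10/3" means 10 treats to the referrer and 3 to the invited friend (the split is my interpretation; the post states only the ratio), the sketch below captures why one scheme reads as extraction and the other as a gift:

```python
# Hypothetical referral schemes: (referrer_reward, friend_reward) in treats.
SCHEMES = {
    "asymmetric": (10, 3),  # "I gain more than you" -- feels extractive
    "symmetric": (5, 5),    # "we both get the same" -- feels like a gift
}

def sharing_feels_like(referrer: int, friend: int) -> str:
    """A referral reads as a gift only when the friend does at least as well
    as the person sharing the link."""
    return "gift" if referrer <= friend else "extraction"

for name, (referrer, friend) in SCHEMES.items():
    print(f"{name}: you get {referrer}, your friend gets {friend} "
          f"-> feels like {sharing_feels_like(referrer, friend)}")
```

Note the asymmetric scheme pays out more in total (13 treats vs. 10), yet converted worse -- perceived fairness mattered more than the absolute reward.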
Bundle psychology is real. The "Most Popular" badge isn't just a label -- it's an anchor. Removing it decreased conversion on that tier.
The Meta-Lesson
The era of the solo AI-augmented builder is here. One engineer plus AI can build what used to require 5-10 people:
- 1,134 commits in 29 days
- 9 product verticals with real business logic
- 284,000 lines of code
- Working credit economy with real paying users
- Built in a domain the developer knew nothing about
The minimum viable team for a multi-product platform has collapsed from "5 engineers, 1 PM, 1 designer" to "1 engineer with good taste and good AI tools."
The engineers who thrive won't be those who type fastest. They'll be those who architect best, review most critically, and ask the sharpest questions.
PanPanMao is live at panpanmao.ai. The git history speaks for itself: 1,134 commits, 97% AI-assisted, and I still can't read a palm.
Find me at ax0x.ai.