The Barrier to Investing Just Changed — An AI-Native Investor Curriculum
Chapter 1 of 10 in the AI-Native Investor curriculum.
One sentence, ten seconds, a few cents of API cost — an agent can now produce the investment data analysis that used to require 300 lines of Python and a full day of work. The output is usually cleaner than what you would write by hand, with explanations included.
This means coding skill is no longer the barrier to investment analysis. The barrier moved to judgment: once you have the analysis, can you tell whether it is right, wrong, or missing something critical?
Producing analysis and judging analysis are different skills. Most investment education — including the original version of this series — teaches the first. This series teaches the second. Ten chapters, zero code.
Three claims this curriculum rests on
1. The bottleneck moved. Writing Python used to be the prerequisite for doing investment analysis without a Bloomberg terminal. That prerequisite dissolved when agents learned to write it — not for free (tokens and API calls cost money), but fast enough and reliably enough that coding skill is no longer the barrier. The new bottleneck is judging whether the agent's output is correct, relevant, and complete. This is a different skill from coding, and most programmer-investors have not built it yet.
2. The new bottleneck requires four specific skills that traditional investment education does not teach. Verifying agent output rather than accepting it. Tracking how your beliefs change over time so they cannot drift silently. Designing research workflows that break a judgment-heavy task into agent-sized pieces. Understanding financial models well enough to know when the agent's model is answering the wrong question. None of these are innate. All of them are learnable. This curriculum teaches them.
3. Two things need to be learned at once: investment fundamentals and agent fleet command. You cannot judge an agent's DCF output if you do not understand what a DCF is — you will accept whatever numbers the agent produces, which are usually consensus estimates dressed up as analysis. You cannot apply that understanding at useful speed without knowing how to delegate, verify, and iterate with a fleet of agents. Neither skill alone is sufficient. This curriculum teaches both, interleaved across ten chapters.
Table of contents
- The bottleneck moved, not disappeared
- You're not the analyst anymore. You're the PM.
- Entry barriers dropped. The judgment bar didn't.
- What this curriculum teaches — two tracks, interleaved
- How to read this curriculum
- Preview of Chapter 2
- Not investment advice
- Workshop — Your framework's first pixel
- What the barrier looks like now
1. The bottleneck moved, not disappeared
Investment education has followed the same pattern for seventy years. A person — institutional analyst or self-taught retail investor, the structure is identical — reads company filings, pulls historical data, builds a model, constructs a thesis, decides what to buy, and bears the consequences. The quality of the investor correlated with how much effort they could put into the first three steps. More filings read, deeper models, more thorough hypothesis testing → better thesis → better returns. Every investment book assumed this correlation held.
It did hold. Then the correlation broke, in roughly eighteen months.
In 2023, a tutorial on reading a 10-K and building an analysis spreadsheet would have been genuinely useful. The reader could spend eight hours implementing the analysis pipeline and emerge with a working evaluation framework for one company. In 2026, the same reader can write one sentence to Claude Code — "read this company's last four 10-K filings and flag meaningful changes in margin structure, debt profile, and cash flow quality" — and get comparable output in under a minute. The output is not always correct, but when it is wrong, it is wrong in diagnosable ways — missing context, stale data, consensus assumptions. Those are fixable. The eight-hour manual process had its own error modes — fatigue, confirmation bias, skipped footnotes — that were harder to diagnose.
This looks like a productivity improvement. It is not. It is a shift in where the bottleneck sits, and the distinction matters.
A productivity improvement means the existing skill gets faster. The hierarchy of who-is-good-at-this is preserved. A bottleneck shift means the old skill ceases to be the constraint, and a different skill takes its place. The hierarchy reorders around the new skill.
In 2026, the ability to write code or operate a Bloomberg terminal is no longer the constraint. Anyone with an agent and an API key can produce fundamental analysis. The constraint is now: can you tell whether the output is right? Do you know which of the agent's assumptions to question? Do you have a framework for deciding whether the thesis the agent helped you articulate is actually your thesis, or consensus in your voice?
Those skills are orthogonal to coding speed. They do not get faster because agents exist. They also do not get slower. They are simply a different category of skill, and the entire value of amateur investing now concentrates in them.
What this looks like in practice
Today, anyone can write: "Pull the last five years of financials for NVDA, compute standard efficiency ratios, flag anything unusual relative to the semiconductor sector median." The agent produces the output — including commentary on what each anomaly plausibly means — in under a minute, for the cost of a few API calls. No coding required. No Bloomberg required.
The investment concepts are the same as they always were: what ratios matter, what "unusual" means in context, what follow-up questions to ask. The access mechanism changed completely. The old curricula that taught the access mechanism as the skill were teaching the wrong thing.
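The curriculum itself stays code-free — but it helps to see what the agent is writing on your behalf when you send that one-sentence prompt. Below is a minimal sketch of the kind of script an agent might generate for the efficiency-ratio request. Every number in it is a hypothetical placeholder (not real NVDA or sector data), and the 25% deviation threshold is an illustrative choice, not a standard.

```python
# Sketch of the script an agent might generate from the one-sentence prompt.
# All figures below are hypothetical placeholders, not real company data.

def efficiency_ratios(revenue, total_assets, inventory, cogs):
    """Two standard efficiency ratios from a single year of financials."""
    return {
        "asset_turnover": revenue / total_assets,      # revenue per dollar of assets
        "inventory_turnover": cogs / inventory,        # how fast inventory cycles
    }

def flag_unusual(ratios, sector_median, tolerance=0.25):
    """Flag any ratio more than `tolerance` (default 25%) away from the
    sector median. Returns {ratio_name: relative deviation}."""
    flags = {}
    for name, value in ratios.items():
        deviation = (value - sector_median[name]) / sector_median[name]
        if abs(deviation) > tolerance:
            flags[name] = round(deviation, 2)
    return flags

# Hypothetical inputs, for illustration only.
ratios = efficiency_ratios(revenue=60e9, total_assets=75e9,
                           inventory=8e9, cogs=22e9)
flags = flag_unusual(ratios, sector_median={"asset_turnover": 0.7,
                                            "inventory_turnover": 4.0})
print(ratios)
print(flags)   # only the ratios that deviate beyond the threshold
```

The point of the sketch is not the arithmetic — it is that every judgment call (which ratios, which median, which threshold) is an assumption the agent made for you, and each one is a place to push back.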
What you lose in this transition
There is a real cost to delegating analysis to agents, and ignoring it would be dishonest.
When you build an analysis pipeline by hand, you develop a tactile familiarity with the data. You notice things by accident — an unusual line item in a footnote, a pattern in quarterly seasonality, a discrepancy between two data sources that the agent would silently reconcile. That accidental discovery is a genuine source of edge, and it is harder to replicate when an agent does the work.
There is also an echo-chamber risk. Agents default to consensus: they pull consensus estimates, use consensus discount rates, and frame their analysis in consensus narratives. If you delegate without pushing back, you get consensus dressed as original research. You will feel informed and confident. You will also be wrong in exactly the same way as everyone else, which is the worst kind of wrong in investing.
This curriculum does not pretend those costs are zero. It teaches you how to mitigate them — through verification patterns, deliberate red-teaming, and the habit of arguing with your agent's output rather than accepting it. But the mitigation is not costless, and it requires effort you would not have had to spend in the old model.
The argument for the new model is not that it is purely better. It is that the old model's central requirement — either pay for Bloomberg or learn to code — is no longer the meaningful filter for who can invest seriously. The filter moved to judgment, and this curriculum is for anyone who wants to build that judgment deliberately.
2. You're not the analyst anymore. You're the PM.
Here is the division of labor at a mid-market hedge fund.
A handful of junior analysts each cover a sector or a set of companies. Their job: read filings, build models, monitor news, run scenarios, produce research memos. They do not decide what to buy. That is not their role. Their role is to surface evidence and arguments.
Above them is a portfolio manager. The PM reads the analysts' memos, asks pointed questions, pushes back on weak reasoning, decides whose work to trust on a given day, and makes the actual allocation decisions. The PM rarely reads a 10-K end to end. The PM's job is judgment: knowing which analyst is right, which is rationalizing, and which is missing something.
In 2026, you have a version of this structure available to you. Your analysts are agents. They work around the clock, at speeds no human junior can match, for the cost of API calls. Each agent can be specialized — one for reading filings, one for tracking news, one for running valuations, one for red-teaming your thesis. The PM is you. The judgment is yours.
Where this analogy breaks
The analogy is useful but imperfect, and it is worth being honest about the gaps.
Real hedge fund PMs typically have ten to twenty years of experience. Their judgment was built through thousands of investment decisions, many of them wrong, with real money on the line. You probably do not have that. This curriculum can give you frameworks, but frameworks are not a substitute for pattern recognition built over years.
Real analysts push back. A good junior analyst will tell the PM "I think you're wrong about this, and here's my data." Agents mostly do not push back unless you explicitly design them to — and even then, the pushback is softer and more agreeable than a human analyst with career incentives to be right. The echo-chamber risk is real.
Real PM-analyst trust is calibrated over years of working together. You learn which analyst is conservative, which is aggressive, which has blind spots in certain sectors. With agents, you are perpetually working with a new team that has no persistent memory of your preferences or its own track record.
The analogy still holds as a structural model — you delegate research and retain judgment — but the gap between the metaphor and the reality is where most of the mistakes will happen.
Four skills the PM role requires
These are not innate talents. They are learnable skills. None of them are taught in traditional investment education, and all of them become critical once agents handle the analytical grunt work.
Verification discipline. When an agent gives you a valuation, the instinct to accept it is strong — the output looks professional, comes with numbers and explanations, and arrives in seconds. The skill you need is the opposite instinct: interrogating the output. What assumptions did the agent make? What would change if you altered one? Is the conclusion sensitive to inputs the agent chose by default? Most people do not have this habit naturally. It has to be built through practice. Chapters 5 and 8 develop it explicitly.
Thesis discipline. Most investors — retail and professional alike — carry beliefs about their holdings that mutate silently to fit whatever the stock has done since. Their memory of "why I bought this" drifts over time to match what they wish they had thought. The skill you need is writing your thesis down in a specific, falsifiable form, and reviewing it periodically against what actually happened. This is how you prevent belief drift — the quiet destroyer of investing discipline. Chapter 7 covers the structure.
Workflow design. "Research this company" is too vague to delegate effectively to an agent fleet. The skill you need is decomposing that into specific subtasks — read the 10-K for red flags, compute a valuation under three scenarios, stress-test the thesis against bearish counterarguments — with verification checkpoints between them. This is the core of fleet architecture: breaking a judgment-heavy task into agent-sized pieces without losing the judgment in the decomposition. Chapter 8 formalizes this into five patterns.
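The decomposition described above can be made concrete without writing any analysis code yourself — the structure is just an ordered list of subtasks, each paired with a checkpoint that you, not the agent, evaluate. The task wordings and verify questions below are illustrative, not a fixed recipe.

```python
# "Research this company" decomposed into agent-sized subtasks, with a
# verification checkpoint between each step. Tasks and checkpoints here
# are examples of the pattern, not a prescribed workflow.

from dataclasses import dataclass

@dataclass
class Step:
    task: str      # what the agent is asked to do
    verify: str    # what YOU check before the next step runs

workflow = [
    Step("Read the last 10-K and list red flags with page citations.",
         "Do the citations exist, and do they say what the agent claims?"),
    Step("Compute a valuation under bear / base / bull scenarios.",
         "Which single assumption moves the valuation the most?"),
    Step("Argue the short case against the resulting thesis.",
         "Did any attack land that the thesis cannot answer?"),
]

for i, step in enumerate(workflow, 1):
    print(f"Step {i}: {step.task}")
    print(f"  checkpoint: {step.verify}")
```

Notice that the judgment lives entirely in the `verify` fields — the decomposition only works if each checkpoint is answered before the next delegation happens.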
Model literacy. Every financial model — DCF, multiples, scenario analysis — is a deliberate simplification of reality. The skill you need is knowing which simplifications are acceptable and which are dangerous for the question you are asking. An agent will compute any model accurately and quickly. It will not tell you that the model's assumptions make it useless for your specific situation. That judgment is yours. Chapters 3 through 6 build this literacy through first principles, financial statement patterns, valuation, and risk.
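To make model literacy tangible, here is a minimal DCF with a one-percentage-point sensitivity sweep — the kind of check you should demand from any agent-produced valuation. Every input is a hypothetical placeholder; the model itself (five years of growing free cash flow plus a Gordon-growth terminal value) is the textbook simplification, presented as a sketch rather than a recommended valuation method.

```python
# Minimal DCF plus sensitivity sweep. All inputs are hypothetical
# placeholders; the goal is to find which assumption the valuation
# secretly depends on, not to value a real company.

def dcf_value(fcf, growth, discount, terminal_growth, years=5):
    """Present value of `years` of growing free cash flow plus a
    Gordon-growth terminal value. Requires discount > terminal_growth."""
    value = 0.0
    cash = fcf
    for t in range(1, years + 1):
        cash *= (1 + growth)
        value += cash / (1 + discount) ** t
    terminal = cash * (1 + terminal_growth) / (discount - terminal_growth)
    return value + terminal / (1 + discount) ** years

BASE = {"fcf": 100.0, "growth": 0.10, "discount": 0.09, "terminal_growth": 0.03}
base = dcf_value(**BASE)

# Perturb each assumption by one percentage point and compare the swings.
for name, tweak in [
    ("growth +1pt",          {"growth": 0.11}),
    ("discount +1pt",        {"discount": 0.10}),
    ("terminal_growth +1pt", {"terminal_growth": 0.04}),
]:
    swing = dcf_value(**{**BASE, **tweak}) / base - 1
    print(f"{name}: {swing:+.1%} change in value")
```

The assumption producing the largest swing — typically one of the terminal-value inputs — is the story choice the valuation hinges on, which is exactly the question Chapter 5 teaches you to ask.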
3. Entry barriers dropped. The judgment bar didn't.
Before 2023, doing serious investment analysis as an individual required one of three things: working at a fund with institutional data access, paying for Bloomberg or FactSet (~$24,000/year per seat), or grinding through the work manually in Excel.
The manual path looked like this in 2020: you want to compare three companies in a sector. The free tools — Yahoo Finance, SEC EDGAR, the company's IR page — give you raw data with no aggregation. No cross-company comparison. No anomaly flagging. No scenario modeling. You open Excel, download three 10-Ks, and start copying numbers. By the time you have normalized the data across three companies, your Saturday is gone and you have not started analyzing. Meanwhile, an analyst at a fund ran the same comparison in one click on their Bloomberg terminal and spent the rest of the afternoon on the actual interesting question.
The gap was not a skill gap. It was a tooling gap, gated by a $24,000/year license.
The technically inclined had a workaround: write the code yourself. This was the premise of books like Stefan Papp's Investing for Programmers and the original version of this curriculum. The premise was correct for 2023 — if you could code, you could close the tooling gap that Bloomberg's pricing created.
That workaround became unnecessary when agents learned to write the code. Not for free — API calls and compute have real costs, typically a few dollars per deep research session — but at a cost and speed that made both the Bloomberg license and the coding skill irrelevant as barriers. The barrier moved to judgment, which was always the hard part. It used to be hidden behind the tooling gap.
What "easier to start" does not mean
The drop in entry barriers is real, but it is easy to overinterpret.
More people entering the market with agent-assisted analysis does not make investing easier. It raises the average quality of research, which means the bar for having an actual edge goes up. If everyone can produce a solid-looking analysis in ten minutes, the analysis itself stops being the edge. The edge moves to: what question did you ask that nobody else asked? What assumption did you challenge that everyone else accepted? What did you notice in the agent's output that other people's agents also produced but that other people did not catch?
This is not a level playing field with institutional investors. Institutions still have proprietary data feeds, alternative data (satellite imagery, credit card transaction data, supply chain tracking), and decades of accumulated sector expertise. What changed is that the analytical tooling gap on public data largely closed. The other gaps — data access, relationships, experience — did not.
For anyone entering this space, the honest framing is: the grunt-work barrier is gone. The judgment barrier that was always there is now the only one left, and it is the barrier this curriculum is about.
4. What this curriculum teaches — two tracks, interleaved
This curriculum teaches two things simultaneously, and if you only learn one, you will not have what you need.
Track A — Investment fundamentals
The mental models, frameworks, and decision rules that serious investors have used for decades. None of this content is new. All of it is load-bearing — you cannot judge an agent's output without it.
- First principles (Chapter 3): what a stock actually is. Three sources of return. Why markets are hard to beat. Where edge realistically comes from.
- Reading a company as patterns (Chapter 4): three financial statements as three lenses — business structure, fragility, truth vs. fiction. Pattern recognition over line-by-line reading. Case studies: Enron, Costco vs. Walmart.
- Valuation as narrative (Chapter 5): Damodaran's framework — every number in a DCF is a story choice. Why agents default to consensus stories. Sensitivity analysis to find the assumption your valuation secretly depends on.
- Risk (Chapter 6): why volatility is a poor proxy. Four risks that actually matter: drawdown, fat tails, correlation breakdown, liquidity. Position sizing as a psychological problem. LTCM as cautionary tale.
- Thesis discipline (Chapter 7): six elements of a defensible thesis. Red-teaming with agents. Version control for convictions — why your beliefs mutate silently and what to do about it.
If you only read Track A, you will have a solid conceptual foundation — the same one that would have served you in 1995. What you will not have is any way to apply it at the speed the current environment demands. You will know what to look for and have no efficient way to look.
Track B — Agent fleet command
The 2026-native skill. This content has no existing textbook, because the problem it addresses did not exist before agents became capable enough to function as research assistants.
- Four agent roles (Chapter 2): analyst, red team, tutor, executor. Most people default to executor mode. The four-role framework is the foundation of fleet thinking.
- Fleet architecture (Chapter 8): five core patterns — delegation, verification, iteration, composition, failure detection. Three reference workflows for investment research. Why the fleet's unanimous agreement is a warning sign.
- Capstone (Chapter 9): you pick three companies, run the full fleet end-to-end, produce three thesis memos, make one decision. This is not a reading chapter. It is a workshop that takes several days.
If you only read Track B, you will build efficient workflows that produce confident-looking outputs about companies you do not actually understand. Efficient garbage looks like signal. This is worse than having no fleet.
The shape of the ten chapters
Each chapter has a specific deliverable you walk away with.
Chapter 1 — When Coding Skill Stops Being the Barrier (this chapter). You walk away with the framework this curriculum is built on and the first pixel of your own investment framework via the workshop.
Chapter 2 — Your Agent Has Four Jobs, Not One. The four roles — analyst, red team, tutor, executor — and when to use each. You walk away knowing how to assign roles via prompt framing, which is the foundational fleet skill. Also covers the six things agents cannot do for you in any role.
Chapter 3 — What Is a Stock, Really. First principles. Three sources of return: dividends, earnings growth, multiple expansion. You walk away with a vocabulary for decomposing any historical return and asking "which source is my return going to come from, and why do I believe that?"
Chapter 4 — Reading a Company: Patterns, Not Pages. Three financial statements as three lenses. You learn pattern recognition, not accounting. Case studies: Enron in hindsight, Costco vs. Walmart as contrast. You walk away able to direct an agent to read a 10-K and know which signals to prioritize — and when to argue with the agent's reading.
Chapter 5 — Valuation Is a Story Wrapped in Numbers. Damodaran's framework. DCF as a narrative device. Sensitivity analysis to find the story's center. You walk away knowing how to bring the narrative yourself and how to spot the four ways agent-generated valuations go wrong.
Chapter 6 — Risk Isn't Volatility. Four real risks. LTCM's specific failure mechanism. The pre-mortem technique: have an agent write the 2030 post-mortem assuming your position blew up. You walk away able to decide whether you are protected against the scenarios it surfaces.
Chapter 7 — Write It Down or It Isn't a Thesis. Six elements of a defensible thesis. Red-teaming with agent-as-short-seller. Thesis version control. You walk away with a template for writing theses that you can argue with in six months, and a stress-test protocol that runs until attacks stop landing.
Chapter 8 — Commanding an Agent Fleet: Five Core Patterns. Delegation, verification, iteration, composition, failure detection. Three reference workflows for investing. You walk away with the architectural vocabulary to design your own research fleet and an understanding of when to distrust its output.
Chapter 9 — Capstone Workshop: Three Companies, One Position. End-to-end exercise using everything from the previous eight chapters. Expected time: 4–8 hours across multiple sittings. You walk away with a real investment memo — or a carefully reasoned decision not to invest — and your first framework document.
Chapter 10 — From Technique to the Way. Closing reflection. What compounds over twenty-five years is the capacity to ask better questions. The final workshop is the only one you do without an agent.
5. How to read this curriculum
This is not a blog series. It is a ten-chapter course published on the blog. Each chapter takes one to two sittings. Several chapters are closer to book chapters than blog posts in length. That is by design — a curriculum that can be skimmed teaches nothing.
Re-reading is expected, especially Chapters 4–6. Reading companies, valuation, and risk are dense topics. Even if they made sense on a first read, a second pass a week later is where the framework crystallizes.
Use agents as tutors. This is the most important meta-instruction in the curriculum. When you hit a concept you do not fully understand, stop reading, open Claude Code, and explain the concept back to the agent in your own words. Ask it to find the gap in your explanation. Iterate until it cannot. Then come back. Chapter 2 formalizes this — tutor is one of the four agent roles, and it is the role you will use most often while working through this material.
Do the workshops. Every chapter has one. A chapter read without its workshop is like a programming book read without typing the examples. The workshops are 30 minutes to 2 hours each, with the exception of the Chapter 9 capstone, which takes several days.
Honest time estimate. Reading: 15–25 hours across ten chapters. Workshops: 15–35 hours, including the 4–8 hour capstone. Total: 30–60 hours of focused work. If you cannot commit to that, the curriculum will not work.
What you get at the end is not a stock pick, a model, or a system. It is a framework you can defend — and the operational ability to run a research fleet against it.
Preview of Chapter 2
The next chapter covers the four roles an agent can play for you: analyst (the default — produces research), red team (attacks your thesis before the market does), tutor (teaches you concepts you need for evaluating the other roles' output), and executor (runs stable workflows at scale). Most people start with executor mode, are disappointed with the results, and either give up or shift to analyst mode. The full skill is fluency in all four.
Not investment advice
This curriculum is educational. Nothing in any chapter is a recommendation to buy, sell, or hold any security. Companies mentioned are case studies, not picks. Past performance does not predict future results. Your investment decisions and their consequences are yours.
This is not a formality. The curriculum exists because the useful future of investing is not "follow these picks." It is "build your own framework and defend it." Telling you what to buy would undermine the thing being taught.
Workshop — Your framework's first pixel
Time: 30–45 minutes. Tools: Any AI agent you already have (Claude Code, Codex, Cursor). Output: A saved document, one to two pages.
Instructions
Step 1 — Pick one stock. Either one you currently own, or one you seriously considered buying in the past year. Not a stock in the news. Not a stock someone on Twitter mentioned. You need a personal stake in being right or wrong about this company. If nothing comes to mind, use a company whose product you personally use and care about.
Step 2 — Ask your agent for two things. A three-paragraph bull case (why the stock will work) and a three-paragraph bear case (why it will fail). Do not specify a format beyond that.
Step 3 — Read both without taking notes. Notice where you nod, and notice where you want to argue.
Step 4 — Mark both cases. Three marks:
- ✓ next to claims you agree with
- ✗ next to claims you disagree with
- ? next to claims you cannot judge
Use only what you already know. Do not research.
Step 5 — Write one sentence at the bottom:
"The claim I disagree with most strongly is __________, and I believe this because __________."
Be specific. "The bull case is too optimistic" does not count. "The bull case assumes data center revenue will grow at 40% annually, and I do not believe that because their largest customer is building an internal alternative" counts.
Step 6 — Save the document with the company ticker and today's date. You will reopen it in Chapter 7.
Why this matters
The "because" clause in that sentence is the first pixel of your framework. The next nine chapters fill in the rest — the concepts that let you turn "I disagree because X" into "I disagree because X, and here are three conditions that would have to hold for my disagreement to be right, and here are the specific signals I am watching for that would change my mind."
You are not trying to be right yet. You are trying to surface what you already believe, in a form specific enough to argue with later.
Do not skip this. The rest of the curriculum needs something to graft onto.
What the barrier looks like now
The old barriers — a Bloomberg terminal, the ability to write code, the stamina to grind through spreadsheets — are gone. What remains is what was always underneath them: the capacity to judge.
To hear an agent's confident answer and ask the follow-up that reveals the assumption it did not question. To hold a thesis through a drawdown because you wrote down why six months ago and the reasons still hold. To know when to override the fleet and when to trust it.
The scarcest investing skill was never finding the answer. It was knowing what to ask. That skill is learnable. This curriculum is the learning path.
Chapter 2 covers the four roles your agent fleet can play: analyst, red team, tutor, and executor. Do the workshop above before moving on. The sentence you write at the end of it will reappear in Chapter 7.