Your Agent Has Four Jobs, Not One
Chapter 2 of 10 in the AI-Native Investor curriculum.
Ask an agent "should I buy NVDA?" and you will get something back within seconds. It will be organized, coherent, include numbers, and sound confident. It will also be nearly useless — a summary of consensus opinion wrapped in a recommendation you did not earn the right to trust, because you do not know which assumptions it rests on or which ones to challenge.
The problem is not the agent. The problem is the request. "Should I buy NVDA" does not specify what kind of work you need done. It is like walking into a room of four specialists — a researcher, a critic, a teacher, and an operations manager — and saying "help me." You will get a response, but it will come from whoever happens to step forward first, and it will not be the response you actually needed.
This chapter is about learning to address the right specialist. An agent is not one tool. It is four distinct roles — analyst, red team, tutor, executor — and the way you frame your request determines which role activates. Fluency in all four, and the judgment to know which one to invoke at each stage of your research process, is the foundational fleet skill this curriculum builds on.
What this chapter covers
- Why "should I buy X" is the wrong question
- Role 1 — Analyst: the default, and its default problem
- Role 2 — Red Team: attacking your thesis before the market does
- Role 3 — Tutor: the most important role for this curriculum
- Role 4 — Executor: stable workflows at scale
- The request as a casting call — switching roles in practice
- Six things agents cannot do in any role
- The chicken-and-egg problem, and how this curriculum solves it
- Workshop: one question, four roles
1. Why "should I buy X" is the wrong question
When most people sit down with an agent to think about an investment, they ask a version of the same question: should I buy this stock? The agent obliges. It produces a balanced-sounding analysis with bull points and bear points, usually concluding with something hedged like "this stock could be a good fit for investors with a long-term horizon and moderate risk tolerance."
This output has three problems, and they are all structural — they follow from the shape of the question, not from any failure of the agent.
First, the question asks for a recommendation without specifying the criteria. Buy at what price? For what holding period? Under what assumptions about the company's future? In what portfolio context? The agent fills in all of these with defaults — usually consensus estimates, a vague medium-term horizon, and no portfolio context at all. The result is a generic answer to a question you did not actually ask.
Second, the question collapses multiple stages of work into one. A real investment decision passes through research, challenge, learning, and execution — in that order. Asking "should I buy" skips to the end. It is like asking a project team for the final deliverable on day one, before anyone has done discovery, before anyone has stress-tested the plan, before you have even defined what success looks like. You get something that has the shape of a conclusion without the substance behind it.
Third, the question trains you to accept answers rather than interrogate them. Every time you ask an agent for a recommendation and act on it, you are outsourcing the judgment this curriculum exists to build. The agent becomes an oracle instead of a tool. Oracles are comforting and useless — they absorb responsibility without reducing risk.
The fix is not asking better questions within the same mode. The fix is recognizing that investment research requires four fundamentally different kinds of work, and explicitly telling the agent which one you need.
2. Role 1 — Analyst: the default, and its default problem
When you ask an agent about a stock without specifying a role, you get the analyst. This is the role most people use, and it is the role the agent defaults to. The analyst produces research: summaries, comparisons, data aggregation, trend identification, historical context.
The analyst role is genuinely powerful. Ask it to compare the gross margin trends of three semiconductor companies over five years and explain the divergence. Ask it to pull the last four quarters of free cash flow for a company and flag where operating cash flow and net income diverge. Ask it to summarize the risk factors section of a 10-K and identify which risks are new since the previous year's filing. These are tasks that would have taken hours of manual work in 2023 and take seconds now. The output is usually accurate, well-organized, and includes context you might not have thought to look for.
The default problem is consensus. The analyst role, left unguided, produces consensus analysis — not because the agent is lazy, but because consensus is what most of its training data reflects. Analyst reports, financial commentary, earnings call summaries — the corpus is dominated by consensus views. When the agent summarizes a company, it summarizes what most analysts think about that company. When it identifies trends, it identifies the trends most commentators have already identified.
This is not a flaw you can prompt your way out of. It is a structural feature of how these models work. The training data is the water the fish swims in. You can ask the agent to be contrarian, and it will produce something that looks contrarian — but the contrarian view it produces will usually be the most popular contrarian view, which is itself a form of consensus.
When the analyst role is the right choice
The analyst role is the right choice when you need information aggregation, not judgment. It excels at tasks where the value is in the gathering and organizing, not in the interpreting.
Specific examples: pulling financial data across multiple periods or companies. Summarizing the contents of a long document (a 10-K, an earnings call transcript, an industry report). Identifying factual changes between two time periods — new risks disclosed, new segments reported, new management appointments. Computing ratios, percentages, and growth rates from raw numbers.
In all of these cases, you are asking the agent to do work that has a verifiable right answer. The gross margin either went up or it did not. The debt maturity schedule either shows a wall in 2027 or it does not. The CEO either mentioned pricing pressure on the last earnings call or they did not. This is where the analyst role earns its keep — compressing hours of manual data work into seconds.
When the analyst role is the wrong choice
The analyst role is the wrong choice when you need challenge, understanding, or conviction. Using the analyst for everything is like running a project where the entire team does research and nobody does quality assurance, nobody teaches the new team members, and nobody pushes back on the assumptions. The research might be thorough. The conclusions will still be weak.
Specifically: do not use the analyst role to decide whether a thesis is correct. Do not use it to understand a concept you are unclear on (you will get a summary, not comprehension). Do not use it to find holes in your own reasoning — it will politely agree with whatever framing you provide and find supporting evidence. These tasks require the other three roles.
3. Role 2 — Red Team: attacking your thesis before the market does
The red team role exists for one purpose: to find the weaknesses in your thinking before you put money behind it. You invoke it by framing your request as an adversarial challenge rather than a research task.
The difference in framing is concrete. Instead of asking "what are the risks of investing in this company?" — which activates the analyst and produces a balanced list of risks alongside opportunities — you say something like: "I believe this company's revenue will grow at 15% annually for the next five years because of X, Y, and Z. You are a short seller who thinks I'm wrong. Build the strongest possible case that my thesis will fail. Assume I'm the mark."
The word choice matters. "Short seller" is not decoration. It tells the agent to adopt a specific adversarial posture — one where the goal is not balance but attack. The output shifts from "here are some risks to consider" to "here is why your specific assumptions are wrong, with evidence." The quality difference is significant.
What good red-teaming looks like
A good red team response targets your specific assumptions, not generic risks. If your thesis rests on revenue growth, the red team should attack the revenue growth assumption with specific counterevidence — customer concentration risk, competitive entry, TAM saturation, historical precedent of similar growth rates in the sector proving unsustainable. If your thesis rests on margin expansion, the attack should focus on what would prevent margin expansion — input cost pressure, pricing competition, R&D spending requirements.
The mark of useful red-teaming is that it makes you uncomfortable. If the red team output feels easy to dismiss, either your thesis is genuinely strong (rare for a first draft) or the red team is not attacking hard enough. Push it. "That attack was weak. What would a sophisticated short seller who has been following this company for five years say? What would they know that I'm missing?"
The iterate-until-silence pattern
The most valuable red team pattern is iterative. You present your thesis. The red team attacks it. You revise the thesis to address the strongest attacks. The red team attacks the revised version. You revise again. You keep going until the attacks stop being substantive — until the red team is recycling weak objections because the strong ones have been addressed.
This usually takes three to five rounds. The thesis that emerges from this process is materially different from, and materially stronger than, the thesis you started with. Assumptions you were not conscious of become explicit. Risks you had not considered become part of your monitoring framework. The thesis becomes something you can defend, not just something you believe.
The trade-off
Red-teaming with an agent has a ceiling, and it is worth being honest about where that ceiling is.
The agent attacks your thesis using information and reasoning patterns from its training data. It will find the arguments that a well-read generalist would find. It will not find the argument that requires deep sector-specific knowledge — the kind of knowledge a specialized short seller who has followed the industry for a decade would have. If the fatal flaw in your thesis is something only an expert would know (a regulatory change brewing in committee, a supply chain dependency that does not show up in public filings, a cultural dynamic inside the company that explains the management turnover), the agent will miss it.
This means red-teaming with agents is necessary but not sufficient. It catches the arguments you should have thought of. It does not catch the arguments you could not have thought of. Catching the former is most of the value — most theses fail on obvious problems their authors were too close to see. But for high-conviction positions with real money behind them, agent red-teaming should be supplemented with human conversation, industry research, and the kind of primary investigation that agents cannot conduct.
4. Role 3 — Tutor: the most important role for this curriculum
The tutor role is what separates using an agent as a crutch from using an agent as a training tool. In tutor mode, the agent does not produce research or attack your thesis. It teaches you the concepts you need to evaluate the research and the attacks yourself.
You invoke the tutor role by framing your request around your own understanding rather than around a company or a thesis. Instead of "explain free cash flow" — which activates the analyst and produces a textbook summary — you say: "I'm looking at a company whose operating cash flow is $2 billion but whose free cash flow is only $400 million. I don't understand what's absorbing the difference. Walk me through what would cause that gap, why it matters, and how I should think about whether the gap is a problem or just a feature of this business."
The difference is that the tutor is responding to your specific confusion, not producing a generic explanation. The output is calibrated to what you do not know, which makes it dramatically more useful than a summary.
The explain-back loop
The most powerful use of the tutor role is the explain-back loop. After the agent explains a concept, you explain it back in your own words. Then you ask the agent to find the gap in your explanation. This sounds simple. It is the single most effective learning technique available when working with agents, and almost nobody does it.
Here is why it works. When you read an explanation, you feel like you understand it. This feeling is unreliable — it conflates recognition ("I followed the words") with comprehension ("I could reconstruct the reasoning from scratch"). When you explain it back, the gaps between recognition and comprehension become immediately visible. The agent, playing tutor, can identify precisely where your reconstruction diverges from the underlying logic.
A practical example. You are working through Chapter 5 on valuation and encounter the concept of terminal value in a DCF. The agent explains it. You explain it back: "Terminal value is the value of all cash flows after the explicit forecast period, and it's usually calculated using a perpetual growth rate." The agent, in tutor mode, might respond: "Your definition is mechanically correct but missing why it matters. Terminal value typically accounts for 60-80% of the total DCF value, which means the perpetual growth rate assumption — a single number you have to choose — is driving most of the valuation. Can you explain what happens to the valuation if you change that growth rate by one percentage point?"
That follow-up question is worth more than the original explanation. It moves you from "I know the definition" to "I understand the sensitivity" — and sensitivity is what you actually need when you are evaluating an agent's DCF output.
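The sensitivity behind the tutor's question can be checked in a few lines. This is a minimal sketch, not a full DCF — the cash flow, discount rate, and growth figures below are illustrative assumptions, chosen only to show how hard the perpetual growth rate drives the result.

```python
# Sensitivity of a DCF terminal value to the perpetual growth rate.
# All inputs are illustrative assumptions, not figures from the chapter.

def terminal_value(final_fcf: float, discount_rate: float, growth: float) -> float:
    """Gordon growth terminal value at the end of the explicit forecast period."""
    return final_fcf * (1 + growth) / (discount_rate - growth)

final_fcf = 100.0      # free cash flow in the last explicit forecast year
discount_rate = 0.09   # assumed cost of capital

for growth in (0.02, 0.03, 0.04):
    tv = terminal_value(final_fcf, discount_rate, growth)
    print(f"g = {growth:.0%}: terminal value = {tv:,.0f}")
```

Moving the assumed growth rate from 2% to 3% lifts the terminal value by roughly 18%, and from 3% to 4% by roughly 21% more — a one-point change in a single chosen number swings the largest component of the valuation.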
When to invoke the tutor
The tutor role should be your default when you are working through this curriculum. Chapters 3 through 6 — first principles, reading companies, valuation, and risk — are dense. They cover concepts that take professional analysts years to internalize. Reading them once will not be enough. The tutor role is how you close the gap between reading a concept and being able to apply it.
Specific triggers for switching to tutor mode:
- You encounter a term you could define but could not explain to a friend without jargon.
- You read a section and agreed with it but could not reconstruct the argument from memory.
- The agent (in analyst mode) produced output that uses a concept you recognized but could not evaluate — you know what a P/E ratio is, but you could not explain why the same P/E might be cheap for one company and expensive for another.
- You disagree with something but cannot articulate why precisely enough to argue it.
In all four cases, the instinct is to move on. The correct move is to stop, switch to tutor mode, and close the gap before it compounds. Gaps in foundational understanding do not stay small. They become the hidden assumptions in every subsequent analysis.
The trade-off
The tutor role's limitation is that it cannot teach you pattern recognition. It can explain what free cash flow conversion means, how to calculate it, and why it matters. It cannot give you the intuitive sense — built over dozens of company analyses — that tells an experienced investor "this company's FCF conversion looks wrong, something is being hidden in working capital." That intuition comes from repetition, not explanation.
The tutor accelerates the early stages of learning dramatically. It compresses what used to take weeks of textbook reading into hours of targeted interaction. But it does not eliminate the need for practice — the workshops in each chapter exist specifically because understanding a concept is not the same as being able to apply it under uncertainty.
Charlie Munger's concept of a "latticework of mental models" is relevant here. Munger's argument was that real understanding comes not from knowing individual concepts but from having enough of them — from enough different disciplines — that they reinforce each other. When you understand accounting, competitive strategy, human psychology, and probability theory, each one illuminates the others. The tutor role is exceptionally good at building individual strands of the lattice. The connections between strands — the moments where a risk concept from Chapter 6 suddenly clarifies something in a company's balance sheet from Chapter 4 — those connections happen through practice and reflection, not through tutoring.
5. Role 4 — Executor: stable workflows at scale
The executor role is the one most people start with, and the one that matters least until the other three have done their work.
In executor mode, the agent runs a defined workflow — pulling data, formatting output, computing metrics, generating reports — without requiring judgment calls during the process. The value of the executor is consistency and speed. Once you know what analysis you want, the executor runs it across ten companies, or across ten quarters, or on a daily schedule, without variation.
You invoke the executor by giving the agent a complete specification. Not "research this company" (analyst) or "attack this thesis" (red team) or "teach me about this concept" (tutor), but "pull the last eight quarters of revenue, operating margin, and free cash flow for these five companies. Compute year-over-year growth rates. Flag any quarter where margin compressed more than 200 basis points. Present in a comparison table."
The specification leaves no room for judgment. Every step is defined. The agent's job is to follow the instructions accurately and quickly.
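A request that complete is nearly pseudocode already, which is why the executor role works: every judgment has been made before the agent runs. The sketch below expresses the flagging step of that workflow as a function — the input shape (a list of quarterly records) and the field names are illustrative assumptions, not a real data API.

```python
# Executor-style workflow: year-over-year growth plus a margin-compression flag.
# The input shape (list of quarterly dicts) is an illustrative assumption.

def flag_margin_compression(quarters: list[dict], threshold_bps: float = 200.0) -> list[dict]:
    """Return one row per quarter with YoY revenue growth and a flag set when
    operating margin fell more than `threshold_bps` versus the prior quarter."""
    rows = []
    prev_margin = None
    for i, q in enumerate(quarters):
        yoy = None
        if i >= 4:  # compare against the same quarter one year earlier
            yoy = q["revenue"] / quarters[i - 4]["revenue"] - 1
        compressed = (
            prev_margin is not None
            and (prev_margin - q["op_margin"]) * 10_000 > threshold_bps
        )
        rows.append({"quarter": q["quarter"], "yoy_growth": yoy, "flag": compressed})
        prev_margin = q["op_margin"]
    return rows
```

Note that the 200-basis-point threshold is a parameter, not a judgment call made mid-run. Changing it means changing the specification — the executor discipline in miniature.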
Where the executor role earns its value
The executor shines in three specific situations.
Screening and filtering. You have a set of criteria — minimum market cap, minimum free cash flow yield, maximum debt-to-equity ratio — and you want to identify which companies in a sector pass the screen. The executor runs this mechanically, which is exactly what you want. You do not need the agent to interpret the results. You need it to filter.
Recurring monitoring. You hold a position and want to track specific metrics quarterly — revenue growth relative to guidance, margin trajectory, insider transactions, changes in institutional ownership. The executor pulls the data each time with the same format, making period-over-period comparison trivial. This is the investment equivalent of a project status dashboard — the value is in the consistency of the format, not in any single data point.
Standardized comparison. You want to compare three companies on the same dimensions — same metrics, same time period, same format. The executor ensures the comparison is apples-to-apples, which is surprisingly hard to do manually when companies have different fiscal years, different segment definitions, and different reporting conventions.
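Of the three, screening shows the executor's mechanical character most directly. A minimal sketch, with made-up field names, thresholds, and tickers standing in for criteria you have actually validated:

```python
# Mechanical screen: keep companies that pass every threshold.
# Field names, thresholds, and tickers are illustrative assumptions.

def passes_screen(c: dict) -> bool:
    return (
        c["market_cap"] >= 2_000_000_000   # minimum market cap
        and c["fcf_yield"] >= 0.05         # minimum free cash flow yield
        and c["debt_to_equity"] <= 1.0     # maximum leverage
    )

universe = [
    {"ticker": "AAA", "market_cap": 5e9, "fcf_yield": 0.07, "debt_to_equity": 0.4},
    {"ticker": "BBB", "market_cap": 8e8, "fcf_yield": 0.09, "debt_to_equity": 0.2},
    {"ticker": "CCC", "market_cap": 3e9, "fcf_yield": 0.02, "debt_to_equity": 0.6},
]

survivors = [c["ticker"] for c in universe if passes_screen(c)]
print(survivors)  # → ['AAA']
```

The code is trivial, and that is the point: all of a screen's value lives in the thresholds, none in the filtering.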
The premature-executor trap
The trap is using the executor before you know what workflow to execute. This happens more often than it should. Someone reads about investment screening, asks the agent to run a value screen (low P/E, high dividend yield, low debt), gets a list of twenty companies, and starts "researching" them — which usually means asking the agent, in analyst mode, to produce summaries of each.
The problem is that the screen itself embodies assumptions the person has not examined. Why these criteria? Why these thresholds? What kinds of companies does this screen systematically miss? What kinds of value traps does it systematically include? These are judgment questions that belong to the analyst and red team roles. Running a screen before answering them produces a list that feels like progress but is actually noise with structure.
The executor is the last role in the chain, not the first. The sequence that works is: tutor (learn the concepts), analyst (research the landscape), red team (challenge your assumptions), executor (run the workflow you have designed and validated). Inverting this order — executing before understanding — is the most common mistake in agent-assisted investing, and it produces the most expensive kind of confidence: confidence backed by a systematic process you do not understand.
6. The request as a casting call — switching roles in practice
The four roles are not four different agents. They are four modes of the same agent, activated by how you frame the request. Think of each request as a casting call — the words you use determine which specialist shows up.
The shift between roles happens in a single sentence. Here are the transitions for the same underlying question — "does this company have a durable competitive advantage?" — framed for each role.
Analyst framing: "Analyze the competitive position of [company]. Identify the sources of its competitive advantage, how durable they appear based on the last five years of financial data, and how they compare to the two closest competitors." This produces research. It will be thorough, organized, and consensus-aligned. Use it when you need the factual landscape.
Red team framing: "I believe [company] has a durable competitive moat based on [specific reasons]. You are a competing firm's strategy team trying to erode that moat. What are the three most realistic attack vectors, and what evidence from the last three years suggests any of them are already working?" This produces challenge. It will be uncomfortable and specific. Use it when you need your assumptions tested.
Tutor framing: "I keep reading that [company] has a 'moat' but I'm not sure I understand what makes a moat durable versus temporary. Using this company as a case study, walk me through how to distinguish the two. Then test me — give me a scenario and ask me whether the moat holds." This produces learning. It will be interactive and calibrated to your level. Use it when you need to build the understanding required to evaluate the other roles' output.
Executor framing: "For [company] and its two closest competitors, pull the last five years of gross margin, operating margin, R&D as a percentage of revenue, and customer retention rate if available. Format as a comparison table with year-over-year changes highlighted." This produces structured data. It will be consistent, comparable, and interpretation-free. Use it when you already know what to look at and need the data organized.
The meta-skill: knowing which role to invoke
The four framings above are not interchangeable. Each produces genuinely different output, and using the wrong one at the wrong time wastes your time or — worse — gives you false confidence.
The decision of which role to invoke at any given moment is itself a judgment call. A rough heuristic:
- If you are early in researching a company and do not yet have a view, start with analyst.
- If you have a view and it feels solid, switch to red team immediately. The more solid it feels, the more you need the red team. Conviction without challenge is the most dangerous state in investing.
- If the analyst or red team produced output that uses concepts or frameworks you do not fully understand, stop and switch to tutor before continuing.
- If you have a validated workflow that you want to repeat across companies or time periods, use executor.
This heuristic will evolve as your judgment develops. By the time you reach the capstone in Chapter 9, you will have enough practice with each role that the transitions become intuitive. For now, the conscious act of choosing a role before each request is the skill to build.
Combining roles in a session
Real research sessions involve multiple role switches. A typical pattern for investigating a company might look like this:
Start in analyst mode — get the landscape. What does this company do, what are its financial trends, who are the competitors? Spend fifteen minutes reading and marking the output.
Switch to tutor mode for anything in the analyst output that you did not fully understand. If the analyst mentioned "negative working capital cycle" as a competitive advantage and you are not sure why that matters, stop and learn it now.
Form a preliminary thesis based on what you have learned. Write it down in two to three sentences.
Switch to red team mode. Present your thesis and let the agent attack it. Revise. Attack again. Continue until the attacks become repetitive.
Switch to executor mode to pull specific data that settles any factual disputes from the red team round — "the red team claimed customer concentration is a risk; pull the revenue breakdown by customer segment for the last three years so I can see."
This is not a rigid formula. Some research sessions will spend most of the time in tutor mode because the domain is unfamiliar. Others will spend most of the time in red team mode because the thesis is strong and needs stress-testing. The point is that each switch is deliberate — you are choosing which specialist to consult, not hoping the agent guesses correctly.
7. Six things agents cannot do in any role
The four roles cover a great deal of the investment research process. But there are six capabilities that no agent possesses in any role, and mistaking the agent's output for any of them is where real money gets lost.
1. Narrative judgment
An agent can summarize a company's story. It cannot tell you whether the story is true. "Management says they are transitioning to a subscription model that will improve margins" — the agent can report this, compute what the margins would look like under various adoption rates, and compare it to other companies that have made similar transitions. What it cannot do is judge whether this particular management team, in this particular competitive environment, with this particular organizational culture, will actually execute the transition. That judgment requires integrating information that is not in any filing or transcript — the tone of management's voice on an earnings call, the pattern of executive departures, the gap between what the company says and what its hiring patterns suggest.
Warren Buffett's concept of investing in "honest and able management" captures this. Honesty and ability are not numbers. They are judgments formed over years of observation. The agent can surface data relevant to the judgment. It cannot make the judgment for you.
2. Conviction under drawdown
Your position drops 30%. The thesis has not changed — the market is reacting to sector-wide selling, not to anything specific to your company. Do you hold? Do you add? Do you sell to stop the bleeding?
No agent can answer this for you, because the answer depends on something the agent does not know: your psychological relationship with money. A 30% drawdown on a position that represents 2% of your portfolio is annoying. The same drawdown on a position that represents 40% of your portfolio is life-altering. The agent does not know your portfolio composition, your financial obligations, your time horizon, or your emotional breaking point. Even if you told it all of these things, it cannot simulate the feeling of watching real money evaporate and the pressure to do something — anything — to make the feeling stop.
Conviction under drawdown is built through experience, not analysis. The workshops in this curriculum — particularly the thesis discipline work in Chapter 7 — give you the tools to make better decisions under pressure. But the decision is always yours.
3. Sizing to your psychology
Position sizing — how much of your portfolio to put into any single position — is partially mathematical and partially psychological. The mathematical part (which Chapter 6 covers through the Kelly criterion and its conservative variants) the agent can help with. The psychological part it cannot.
The question is not "what is the mathematically optimal position size?" It is "what is the largest position I can hold without it affecting my sleep, my judgment, or my ability to think clearly about the rest of my portfolio?" This varies enormously between people with identical financial circumstances. A person with a stable salary and no debt can tolerate different position sizes than a person with the same net worth but irregular income and a mortgage. The agent sees none of this.
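The mathematical half does have a closed form. The sketch below is the standard binary-outcome Kelly formula with a fractional haircut — the win probability, payoff, and haircut are illustrative assumptions, and estimating the first two (and choosing the third) is precisely the judgment work the agent cannot see.

```python
# Kelly criterion for a binary bet: f* = p - (1 - p) / b,
# where p is the win probability and b the net payoff per unit risked.
# Inputs are illustrative; conservative practice scales the result down.

def kelly_fraction(p: float, b: float, scale: float = 0.5) -> float:
    """Scaled ('fractional') Kelly position size, floored at zero."""
    full_kelly = p - (1 - p) / b
    return max(0.0, full_kelly * scale)

# 55/45 coin flip at even odds: full Kelly 0.10, half-Kelly 0.05
print(kelly_fraction(0.55, 1.0))
```

Even here the formula only converts estimates into a number. The estimates themselves, and whether the resulting position lets you sleep, remain your problem.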
4. Distinguishing paradigm shifts from noise
Markets are noisy. On any given day, there are dozens of narratives competing for attention — new regulations, geopolitical events, earnings surprises, analyst upgrades and downgrades, viral social media posts. Most of this is noise. Occasionally, something in the noise is a genuine paradigm shift — a structural change in the industry that makes old frameworks obsolete.
The agent cannot reliably tell which is which. This is not a temporary limitation that will be solved with more training data. Paradigm shifts are, by definition, unprecedented. The agent's pattern-matching is calibrated to historical precedent. When something genuinely new happens — something that does not match any historical pattern — the agent will pattern-match it to the closest historical precedent, which may be exactly wrong.
A practical example: when a new technology threatens to disrupt an established industry, the agent will compare it to previous disruptions. But the comparison only works if the new disruption follows the same dynamics as the historical ones. If the underlying economics are different — if the new technology has cost curves or network effects or regulatory dynamics that previous disruptions did not — the historical comparison is actively misleading.
5. Defining your circle of competence
Peter Lynch's "invest in what you understand" is simple to state and extraordinarily difficult to implement honestly. The difficulty is not in understanding your areas of strength. It is in admitting your areas of ignorance — especially when a promising investment sits in a domain you know just enough about to feel confident but not enough to be competent.
The agent will happily analyze any company in any sector for you. It will not tell you that your understanding of the sector is too shallow for the analysis to be useful. You have to draw that boundary yourself. The tutor role can help you learn enough about an unfamiliar sector to decide whether to include it in your circle. But the decision to include or exclude — and the discipline to stay out of sectors where your understanding is shallow despite the opportunity looking attractive — is yours.
6. The discipline to do nothing
In most market environments, the correct action for most individual investors on most days is: nothing. Do not buy, do not sell, do not adjust. The thesis has not changed. The position sizes are appropriate. The monitoring metrics are within expected ranges. There is nothing to do.
This is psychologically difficult, because doing nothing feels passive and the market constantly presents apparent opportunities. The agent, if asked, will always find something to analyze, something to trade, something to adjust. It is responsive to your requests, and you will always have requests when you are bored or anxious.
The discipline to close the agent, step away, and wait is not a skill the agent can teach or support. It is a skill you build by writing your thesis, defining your kill criteria, setting your review dates — all practices from Chapter 7 — and then honoring them. The agent's job is to help you think clearly when it is time to think. Your job is to recognize when it is not time to think.
8. The chicken-and-egg problem, and how this curriculum solves it
The four roles create a dependency that is worth naming directly: you need domain knowledge to know which role to invoke, but you need the roles to acquire domain knowledge efficiently. If you do not understand what free cash flow is, you do not know that the analyst's output about free cash flow needs to be questioned. If you cannot question the analyst's output, you cannot direct the red team to attack the right assumptions. If you do not know what to ask the tutor, the tutor cannot close the gap.
This is the chicken-and-egg problem of agent-assisted investing, and it is the reason most people default to the analyst role and stay there. The analyst requires the least domain knowledge to invoke. "Tell me about this company" does not require you to know what to ask. The cost is that you cannot evaluate the answer.
This curriculum solves the problem by interleaving the two tracks — investment fundamentals and agent fleet command — across ten chapters, deliberately sequenced.
Chapter 1 (which you just read) establishes the framework: agents handle analysis, you handle judgment, and there are four skills to build. This chapter — Chapter 2 — gives you the four roles and enough awareness of each to start using them consciously. Chapters 3 through 6 build the domain knowledge — first principles, reading companies, valuation, risk — that makes you progressively more effective at invoking the right role and evaluating its output. Chapter 7 synthesizes the domain knowledge into thesis discipline. Chapter 8 formalizes the fleet patterns you have been practicing informally since Chapter 2. Chapter 9 puts everything together in a capstone where you run a full fleet end-to-end on real companies.
The sequencing matters. After this chapter, when you begin working through Chapter 3 on first principles, you will know to use the tutor role when a concept is unclear, the analyst role to pull supporting data, and the red team role to test your understanding. You will not yet be an expert in any of these roles. Expertise comes from doing them repeatedly across Chapters 3 through 8, with each chapter adding domain knowledge that makes the roles more effective.
By the time you reach the capstone, the role transitions should feel natural — not because you memorized a framework, but because you have practiced each role dozens of times on real investment questions.
The trade-off
This interleaved approach has a cost. In the early chapters, you will invoke roles imperfectly. You will ask the red team to attack assumptions without fully understanding those assumptions. You will use the tutor to learn concepts and retain only a fraction. You will run the executor on workflows that are not yet well-designed.
This is expected, and it is not a failure of the curriculum. Learning any complex skill involves a period of conscious incompetence — you know the roles exist, you can invoke them, but you are not yet good at them. The alternative — teaching all the fundamentals first and all the fleet skills second — would mean spending five chapters unable to practice the fleet skills at all, which is worse.
The messy, imperfect practice of the early chapters is the foundation that the later chapters refine.
Workshop — One question, four roles
Time: 45–60 minutes. Tools: Any AI agent (Claude, GPT, Gemini — any model capable of extended conversation). Output: A saved document with four distinct outputs and a reflection paragraph.
Instructions
Step 1 — Pick a company. Use the same company from the Chapter 1 workshop, if possible. Continuity matters — you are building a file on this company that will grow through the curriculum. If you did not do the Chapter 1 workshop, pick a company whose product you use regularly and whose business model you could explain to a friend in three sentences.
Step 2 — Frame the question. The question for all four roles is the same: "Does [company] have a durable competitive advantage?" You will ask this question four times, each time framed for a different role.
Step 3 — Run the analyst. Frame: "Analyze the competitive position of [company]. What are its sources of competitive advantage? How have they held up over the last five years based on financial data? How do they compare to the two closest competitors?" Save the output. Read it. Highlight the three most important claims.
Step 4 — Run the red team. Frame: "I believe [company] has a durable competitive advantage because [use the three claims you just highlighted]. You are a short seller building a case that this advantage is eroding. Give me the three strongest attack vectors with specific evidence." Save the output. Read it. Mark which attacks you can dismiss and which ones concern you.
Step 5 — Run the tutor. Frame: "The analyst said [company] has [specific advantage]. The short seller said [specific attack]. I'm not sure I understand enough about [the concept in dispute — might be network effects, switching costs, scale economies, or something else] to judge who's right. Teach me enough to evaluate this specific disagreement, then test my understanding." Interact until you feel you can take a side. Save the exchange.
Step 6 — Run the executor. Frame: "Pull the last five years of [relevant metrics — gross margin, customer retention, revenue concentration, or whatever the dispute depends on] for [company] and its two closest competitors. Present as a comparison table." Save the output. Use it to settle the factual question at the center of the dispute.
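If you want to sanity-check the executor's table yourself, the shape of the output is easy to reproduce. Here is a minimal Python sketch using pandas. Every name and number in it is a placeholder — "TargetCo", "RivalA", "RivalB", and all the margin figures are hypothetical, not real financial data — so substitute the actual metrics the executor pulls.

```python
import pandas as pd

# Hypothetical five-year gross margins (%) for a target company and
# two competitors. Placeholder values only -- not real financial data.
data = {
    "year": [2019, 2020, 2021, 2022, 2023] * 3,
    "company": ["TargetCo"] * 5 + ["RivalA"] * 5 + ["RivalB"] * 5,
    "gross_margin": [42.0, 43.1, 44.5, 44.2, 45.0,   # TargetCo
                     38.5, 38.0, 37.2, 36.8, 36.1,   # RivalA
                     40.0, 40.4, 41.0, 41.3, 41.9],  # RivalB
}
df = pd.DataFrame(data)

# Pivot into the comparison table the executor is asked to produce:
# one row per year, one column per company.
table = df.pivot(index="year", columns="company", values="gross_margin")
print(table)

# A simple trend check: margin change from the first year to the last.
# A widening gap between the target and its rivals supports the
# "durable advantage" claim; a narrowing gap supports the short seller.
trend = table.iloc[-1] - table.iloc[0]
print(trend)
```

The point of the table layout (years as rows, companies as columns) is that the disputed claim usually lives in the trend, not in any single number — which is why the last two lines compute the first-to-last change per company.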
Step 7 — Reflect. Write one paragraph (four to six sentences) answering:
- Which role produced the output that most changed how you think about this company?
- Which role do you suspect you will under-use going forward, and why?
Save the entire document with the company name and today's date. You will return to it.
Why this matters
The purpose of this workshop is not to reach a conclusion about the company. It is to experience the difference between the four roles on the same question — to feel, in practice, how the same underlying question produces fundamentally different outputs depending on which role you invoke. The reflection paragraph forces you to notice which role shifted your thinking, which is the first step toward using all four habitually rather than defaulting to the analyst.
Which specialist is missing from your room?
The four roles are not a theoretical framework. They are a practical tool you will use in every chapter that follows. By the time you reach Chapter 8, you will combine them into multi-step workflows where each role hands off to the next. By Chapter 9, you will run all four on three companies and produce a defensible investment decision.
But the foundation is simpler than the architecture. It is the habit, before each request, of asking yourself: what kind of work do I actually need done right now?
Most people, most of the time, need the work they are not requesting. The person who asks for more research usually needs more challenge. The person who asks for challenge usually needs more understanding. The person who asks for understanding usually needs to execute what they already know. And the person who is executing usually needs to stop and ask whether the workflow they are executing is answering the right question.
The agent does whatever you ask. The hard part was never getting an answer. It was learning to hear which question you are not asking.
Chapter 3 begins the investment fundamentals track: what a stock actually is, three sources of return, and why markets are hard to beat. Bring the tutor role — you will need it.