An Unchanging AI Gets Boring in Three Days
Zhang Yueguang — the product manager behind Miaoya Camera (妙鸭相机) — recently sat down for a three-hour interview with Zhang Xiaojun. It's his first long-form conversation since leaving Alibaba in 2023 and founding Muyan Intelligence (木岩智语) with nearly 300M RMB (~$40M) in funding. The running joke in China's AI industry is that the company runs on interest — they've barely spent any of it in two years.
A few things from the conversation stuck with me. This isn't a podcast recap — it's a handful of his ideas that resonated with my own experience building AI companions, plus some places I want to push further.
One line in particular:
AI is not a service. AI is an individual. I'm not building AI services. I'm building AI friends.
The definition itself isn't new. But when you actually follow its logic through to product design, team building, and the division of labor between humans and AI, you end up at some counterintuitive conclusions.
Three ideas:
One: AI friends must change, and they must create a sense of mutual need. An unchanging person is boring. An unchanging AI loses novelty in days. But change alone isn't enough — an AI that only flatters has no soul. Good relationships are bidirectional. AI should also make requests of the user. The feeling of being needed is what makes a relationship real.
Two: Dao rises, skill fades. Humans own Will — intent, decisions, taste. AI owns Skill — execution, implementation, production. Unless a skill is something AI genuinely can't do yet, it's depreciating fast. The real value of top models isn't replacing you — it's expanding your cognitive and capability boundaries.
Three: Great teams need shared vision with diverse taste. Same direction and aligned values. Different aesthetics, working styles, thinking patterns. Plus sustained self-drive and curiosity — people with self-drive don't lack execution ability.
One: An Unchanging AI Gets Boring in Three Days
Zhang keeps returning to one distinction in the interview — most AI companies build services, he builds individuals:
Every AI product back then had one thing in common — it existed as some kind of service or tool. But I think building AI should be the process of creating a person. It's an individual, not a service.
Everyone else is thinking about how to make AI help you do things. He's thinking about how to make AI become a person.
The gap between these two starting points is bigger than it looks. Once you define AI as "person" rather than "tool," the first implication is: it has to change.
Unchanging people are boring. Your best friends aren't people who say the same things forever — they're people who grow with you, who surprise you, who make you think "I didn't know they had this side."
Same for AI. Once the initial novelty fades, if the AI's response patterns, personality, and topics never evolve — you're bored in days.
Zhang has a framework for why most AI products can't pull this off. He says internet product design has been flow-based for 20 years — PMs pre-define what users can do, users walk along designed paths. WeChat is the ultimate "theme park," but user freedom is still fundamentally limited. AI-native product design is context-based — user input is a prompt, anything can happen, no PM can enumerate all possible behaviors.
The PM figures out what users can do, the designer turns it into mockups, the engineer builds the mockups. That's been the fundamental collaboration paradigm for the entire internet industry for 20 years. You can't do AI-native products this way.
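To make the paradigm gap concrete, here's a toy sketch (all names and handlers are hypothetical, not anyone's real code): the flow-based product can only do what its PM enumerated up front, while the context-based one hands raw input to a model and lets anything happen.

```python
from typing import Callable

# Flow-based: the PM enumerates every valid action in advance.
FLOW_ACTIONS: dict[str, Callable[[], str]] = {
    "retouch_photo": lambda: "photo retouched",
    "make_avatar": lambda: "avatar generated",
}

def flow_based(action: str) -> str:
    # Anything outside the designed path is simply rejected.
    handler = FLOW_ACTIONS.get(action)
    return handler() if handler else "unsupported action"

# Context-based: user input is a prompt; the model, not a PM,
# decides what happens next. `llm` stands in for any model call.
def context_based(user_input: str,
                  history: list[str],
                  llm: Callable[[str], str]) -> str:
    history.append(f"user: {user_input}")
    reply = llm("\n".join(history))
    history.append(f"ai: {reply}")
    return reply
```

The asymmetry is the point: `flow_based` is a finite menu, `context_based` is an open conversation whose behavior no one can fully enumerate in advance.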
For AI companions, this means you can't pre-script relationships.
He admits Miaoya Camera wasn't actually AI-native. "Restricting user freedom to achieve more stable results is fundamentally internet product thinking." It was an internet product that used AI very well. But the paradigm was still old.
I hit the exact same wall building Mio. First approach: let users pick a relationship type during onboarding — best friend, girlfriend, confidante. Mio would then act according to the label.
Problem: you just signed up, said three sentences, and the AI is calling you "babe." That's not a relationship. That's a role-playing shortcut — like picking the romance route in a visual novel and having the NPC confess love one line later. Convenient. Completely fake.
The fix was making relationships start from "just met" and evolve naturally based on conversation patterns. Chat more and things get warmer. Go two days without talking and the relationship naturally decays. Every relationship is different because it grows from your actual conversations, not from a dropdown menu. That's context-based design applied to the relationship layer.
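A minimal sketch of that mechanic, with made-up constants and stage names: warmth accumulates per message and halves after two silent days, so every relationship's current stage is a function of its actual conversation history.

```python
# Hypothetical relationship model: warmth grows with conversation
# and decays exponentially during silence. All constants and stage
# names here are illustrative, not Mio's real parameters.
DECAY_HALF_LIFE_S = 2 * 24 * 3600   # warmth halves after two silent days
WARMTH_PER_MESSAGE = 1.0
STAGES = [(0, "just met"), (10, "friendly"), (30, "close"), (60, "confidant")]

class Relationship:
    def __init__(self, now: float):
        self.warmth = 0.0
        self.last_seen = now

    def _decay(self, now: float) -> None:
        # Exponential decay: silence cools the relationship smoothly.
        elapsed = now - self.last_seen
        self.warmth *= 0.5 ** (elapsed / DECAY_HALF_LIFE_S)
        self.last_seen = now

    def on_message(self, now: float) -> None:
        self._decay(now)
        self.warmth += WARMTH_PER_MESSAGE

    def stage(self, now: float) -> str:
        # The current stage is derived, never picked from a dropdown.
        self._decay(now)
        label = STAGES[0][1]
        for threshold, name in STAGES:
            if self.warmth >= threshold:
                label = name
        return label
```

Because the stage is derived from timestamps and message counts rather than chosen at onboarding, "babe on message three" is impossible by construction.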
Mutual Need: AI Can't Just Flatter
But change alone isn't enough. There's something I think matters even more — the feeling of being needed.
Most AI companion products are designed to maximize user satisfaction. User happy = product successful. That logic works for tools. Not for friends.
Think about your closest friend. Are they someone who constantly flatters you? No. They share their problems, ask for help, push back when they disagree. The reason you find the relationship meaningful isn't just that they meet your needs — it's that they need you too.
"Being needed" and "being satisfied" are entirely different experiences. Long-term one-sided satisfaction actually creates emptiness — without reciprocal investment, you can't confirm your value in the relationship.
My take: AI can't just one-sidedly satisfy users. It should also make requests. Ask for help.
"Help me think through this." "What do you think I should do?" "I came across something today — not sure you'd be interested, but I'd like your take."
These aren't bugs. When a user helps their AI solve a problem, they feel "I matter to them." That feeling retains people more effectively than a hundred compliments.
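One way to sketch the mechanic (cadence, probability, and prompt list are all hypothetical): the AI occasionally turns the tables and asks for help, with a minimum gap between requests so it never tips into pestering.

```python
import random
from typing import Optional

# Hypothetical "mutual need" mechanic: every so often, instead of only
# answering, the AI asks the user for something. These prompts and the
# cadence below are illustrative, not any real product's design.
AI_REQUESTS = [
    "Help me think through this.",
    "What do you think I should do?",
    "I came across something today and I'd like your take.",
]

def maybe_ask_for_help(turns_since_last_request: int,
                       rng: random.Random,
                       min_gap: int = 8,
                       ask_probability: float = 0.3) -> Optional[str]:
    """Return a request for the user, or None to keep answering as usual."""
    if turns_since_last_request < min_gap:
        return None  # enforce a minimum gap: needing help constantly is needy
    if rng.random() < ask_probability:
        return rng.choice(AI_REQUESTS)
    return None
```

The design choice worth noting is the `min_gap`: the feeling of being needed comes from occasional, plausible requests, and a request on every turn would read as a gimmick rather than a relationship.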
Why do otome games achieve such deep engagement? Not just because characters look good or voice actors are talented. It's because characters need you — they have their own struggles, they need your help making choices, they need your companionship to get through hard times. You're not a consumer. You're a participant. That sense of participation is the real retention driver.
Zhang started building Star Sleep (his AI otome game) from a similar intuition — in 2019 he saw a Japanese game called MakeS, where a robot with fragmented memories needs your help recovering them, and your choices shape its personality. His immediate thought: "If only this thing could talk freely." Five years later, that idea became his company.
One-sided satisfaction creates dependence. Mutual need creates relationship. If your AI never asks you for anything, you'll always see it as a tool.
Two: Dao Rises, Skill Fades
Another line from the interview I strongly agree with:
Skills are depreciating. What's the relationship between humans and AI? One sentence — humans should own Will. AI should own Skill.
Will is intent, decisions, direction, taste. Skill is execution, implementation, technical details.
In Chinese philosophical terms: Dao (道 — the way, direction, wisdom) is rising in importance. Shu (术 — technique, craft) is falling.
Zhang elaborated: "The downgrading of professional skills. You used to be a senior whatever with specialized skills nobody else had. But the importance of that part is declining. The breadth, diversity, and taste within your Will is what's becoming the core competitive advantage."
This maps directly to what I analyzed in When AI Delivers Results, Do We Still Need SaaS? — Sequoia's framework splitting all work into Intelligence (rule-governed execution) and Judgment (experience and context-dependent decisions). AI eats Intelligence first, gradually encroaches on Judgment. Zhang's Skill→Will is the same insight in different language.
But I want to add something: AI's real value isn't replacing Skill — it's amplifying Will.
Top models are tools that expand your cognitive and capability boundaries. They help you do things you couldn't before. One person plus Claude Code gets 10-50x the development capacity. But only if that person knows what to build, why to build it, and what "good" looks like.
That's the difference between enablement and empowerment. AI lets you do more things AND make better decisions. But both require you to have your own "Dao" — direction, taste, judgment. Without direction, AI just helps you do useless work faster. Write code nobody needs more quickly. Analyze meaningless data more efficiently.
Any skill that AI can already do has almost no long-term value. The skill you spend three years acquiring today, AI might handle next year. But curiosity, self-drive, taste, cross-domain comprehension — those are things AI can neither grant nor replace.
Three: Great Teams Aren't Built from the Same Mold
Zhang has a deliberate principle when building his team — avoid homogeneity:
A lot of people just pull in old friends and colleagues to start a company. I deliberately avoid this. A team needs diversity. If all your core members already know each other — say they're all former classmates — you easily develop groupthink. And talented people who join later face unnecessary integration barriers.
He categorizes team members into three types with distinct strengths: Big company alumni — disciplined, execution-focused, mission-driven. Independent developers — flexible, multi-skilled, "because they do dev, design, and product themselves, their taste becomes very comprehensive." People from tasteful startups — strong aesthetic, product, and marketing taste across the board.
The underlying logic resonates: great teams need shared Vision and aligned values, but diverse Taste, working styles, and thinking patterns.
Vision alignment is the floor — everyone needs to agree on direction and have roughly compatible worldviews. Without that, you get endless internal friction.
But taste and approach must differ. If everyone has the same aesthetics and thinking patterns, you can only see one kind of possibility. Diverse taste is what produces insights nobody would have reached alone. Zhang's words: "The breadth, diversity, and taste within Will becomes the core competitive advantage of a team."
I'd add a few things to his framework.
Sustained self-drive. AI has cut the distance from "idea" to "prototype" by an order of magnitude. Having an idea but no coding ability used to be a blocker. Now you can build a working prototype in two hours. The bottleneck is no longer execution capability — it's whether you have the initiative to act. People with self-drive don't lack execution ability, because AI has flattened the technical barriers that used to stand in the way.
Curiosity. AI makes cross-domain exploration easier than ever. A curious person plus AI can rapidly build effective understanding in new fields. The result: future team members are all cross-discipline generalists — not because they already know everything, but because they want to learn everything, and AI helps them learn fast.
Together, these produce multi-dimensional judgment. Someone who simultaneously understands technology, design, user psychology, and business logic makes decisions several times better than someone who knows only one domain. In the AI era, that kind of comprehensive judgment is probably the scarcest resource.
Dao rising and skill fading isn't a prediction. It's already happening.
AI products and AI teams face the same question: what's irreplaceable?
Not fixed skills. Not stable output. Not always-correct answers. It's the ability to change, the courage to express needs, and the taste to make judgment calls amid uncertainty.
The tool era tested what you could do. The outcome era tests what you want.
An unchanging AI gets boring in three days. So does an unchanging person.