The Mom Test: Customer Interviews That Don't Lie to You
Rob Fitzpatrick’s The Mom Test (2013) is the shortest, sharpest book on customer discovery in print. The premise is in the title: anyone — even your mother — will tell you your idea is wonderful if you ask badly. The book is the playbook for asking well enough that the feedback is actually useful.
Thirteen years later, it matters more than when it was written. Not because customer behaviour has changed — people still lie politely to anyone pitching them an idea — but because the economic cost of acting on bad feedback has collapsed. In 2013 a misguided product took six months and a team of engineers to build; the cost alone created some natural discipline. In 2026 a team can ship a polished-looking product in a weekend. The discipline has to come from somewhere else, and Fitzpatrick’s rules are the cheapest source available.
The book is compact — three rules, a handful of examples, and a chapter on commitment — and every early-stage PM should read the whole thing. This article summarises the headline rules, extends them into the AI era, and describes how boards and PE diligence teams use customer-interview quality as a proxy for team maturity.
The Mom Test is Rob Fitzpatrick’s 2013 playbook for customer discovery interviews that produce useful evidence rather than polite validation. Its three rules: (1) talk about their life instead of your idea; (2) ask about specifics in the past instead of generics or opinions about the future; (3) talk less and listen more. The book’s central insight is that opinions are worthless and past behaviour predicts — so a Mom Test interview looks like a conversation about the customer’s life and current workarounds, not a pitch dressed up as a survey.
TL;DR: I’ve watched more badly-run customer interviews than I can count, but the sharpest diagnostic isn’t even about the interview technique — it’s whether the interview is feeding a committed outcome. When a team runs customer interviews without a business case behind the bet, no matter how well-run the interviews are, the output is always research-flavoured theatre. Nobody knows which insights are important because nobody has signed up for a revenue line those insights are supposed to move.
The specific in-the-room tell I look for in PE reviews is whether the team can show me their notes from the last ten interviews and tell me which committed outcome each interview was informing. If the notes contain feature requests and positive adjectives — and the team can’t name the revenue line, the TAM/SAM sizing, or the customer segment they’re hunting — both ends of the system have failed. If the notes contain specific dates, amounts of money spent on workarounds, and quoted customer language, and there’s a named business owner with a committed number attached, that’s a team actually learning in service of an actual bet. The Mom Test fixes the technique. The business case fixes the theatre problem that otherwise sits upstream of it.
Fitzpatrick’s Three Rules
These are worth quoting verbatim. Fitzpatrick states them in slightly different forms across the book; the canonical short form is:
- Talk about their life instead of your idea.
- Ask about specifics in the past instead of generics or opinions about the future.
- Talk less and listen more.
They sound obvious. They are obvious. They are also very, very hard to follow in practice, because every instinct a founder has pulls in the opposite direction. You want to pitch. You want validation. You want the conversation to end with the customer saying your idea is amazing. All three impulses produce worthless evidence.
The discipline is to treat the interview as reconnaissance, not sales. You’re there to learn what their life is like, what problems they encounter, what they’ve already paid to solve them, and what it would take for them to change behaviour. You are not there to determine whether they’ll buy your product. You can’t, because they don’t know yet, and their guess is unreliable.
Good Questions vs Bad Questions
The fastest way to internalise the Mom Test is to see bad questions next to their better versions.
| ❌ Bad (hypothetical, leading, opinion-seeking) | ✅ Better (past behaviour, specific, open) |
|---|---|
| “Do you think it’s a good idea?” | “Walk me through the last time this came up.” |
| “Would you buy a product that did X?” | “What have you already tried to fix this?” |
| “How much would you pay for X?” | “How much does this problem currently cost you each month, in time or money?” |
| “Don’t you hate it when Y happens?” | “What was the last thing that made this week harder?” |
| “Would you use a tool that automatically did Z?” | “What did you do the last time Z came up? Walk me through it.” |
| “We’re building something for [problem]. Interested?” | “What’s your current process when [problem] happens?” |
| “Would your boss approve this?” | “Who else has to be involved when you change tooling? Walk me through the last time you bought something like this.” |
| “If we built X, would it help?” | “How have you tried to solve this before? What worked, what didn’t?” |
The pattern: the left column produces opinions about hypothetical futures, with optional flattery. The right column produces artefacts — specific past events, dates, amounts, behaviour sequences. Artefacts can be reasoned about. Opinions cannot.
Compliments, Fluff, and Feature Requests: The Three Sources of Bad Data
Fitzpatrick identifies three categories of data that feel useful and aren’t:
- Compliments. “That’s a great idea!” “I love it!” “My friend would definitely use this.” Worthless. A polite customer will always give compliments; an impolite one will not volunteer useful criticism unprompted. Neither is buying signal. Fitzpatrick calls them “shiny, distracting, and worthless.”
- Fluff. Generic opinions, hypotheticals, future-tense claims. “I’d probably use it weekly.” “I usually try to keep our data clean.” “If you had that feature, I could see myself using it.” These feel substantive but forecast nothing. A customer who says they’d probably use a product weekly is almost never the customer who actually does.
- Feature requests from non-engineers. When customers describe the problem, listen carefully. When they describe the solution — “you should add a bulk-edit button” — treat it as data about the underlying problem, not as a roadmap item. The customer’s solution is their hypothesis about what would help them. Your job is to understand why they hypothesised that, not to build what they said.
Good interview notes contain none of the three, or (more realistically) contain them flagged as noise. Bad interview notes are 80% compliments and feature requests and 20% actual observations about the customer’s life.
Commitment and Advancement
The single most useful operational concept in the book is commitment and advancement. At the end of a customer conversation, Fitzpatrick asks: did the customer give up something they value? And did the conversation move forward in a way that would be visible in a real sales funnel?
Commitment is the customer giving up something of theirs:
- Time (another, longer call; sitting through a demo; doing a pilot)
- Reputation (introducing you to their boss or peers; inviting you to speak to their team)
- Money (a pre-order, a deposit, a small paid pilot, an LOI with a procurement path)
Each tier is more valuable than the last. A customer giving up ten minutes to take the call is table stakes. A customer giving up an hour of their VP’s calendar is meaningful. A customer cutting a £5k purchase order is decisive.
Advancement is the customer moving forward in a recognisable real-world funnel — not the vague forward motion of “they seemed enthusiastic”, but a concrete next step: discovery call → stakeholder demo → legal review → pilot. If the conversation ended without either commitment or advancement, the meeting failed regardless of how warm it felt.
Most founders leave customer conversations with a glow and no commitment. The glow is compliment data. The absence of commitment is the real signal.
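The commitment-and-advancement check is easy to make mechanical in an interview log. A minimal sketch, assuming a simple record per conversation (the field names and tier values are illustrative, not from the book):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative commitment tiers, ordered by strength (time < reputation < money),
# following the hierarchy described above. "none" means a warm chat with no cost
# to the customer.
COMMITMENT_TIERS = {"none": 0, "time": 1, "reputation": 2, "money": 3}


@dataclass
class Interview:
    customer: str
    date: str
    commitment: str = "none"           # what the customer gave up, if anything
    advancement: Optional[str] = None  # concrete next funnel step, e.g. "stakeholder demo"

    def succeeded(self) -> bool:
        # A meeting counts only if it produced commitment or advancement;
        # warmth and compliments score zero.
        return COMMITMENT_TIERS[self.commitment] > 0 or self.advancement is not None


def failed_meetings(log: list[Interview]) -> list[Interview]:
    """Return interviews that ended with neither commitment nor advancement."""
    return [iv for iv in log if not iv.succeeded()]
```

Run against a quarter’s log, the `failed_meetings` list is the glow-without-commitment count — the number the founder’s memory will systematically under-report.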
The 2026 Reframe: Mom Test Matters More Now
Pre-AI, the binding constraint on shipping a product was feasibility — could we build it? That framed the learning loop. If building was expensive, interviews were pre-spend due diligence. They mattered, but the build itself provided a second validation loop; users who got the thing they asked for either used it or didn’t, and the feedback loop closed.
In 2026 that second loop has broken. Building is nearly free, which means teams ship products without doing the upstream interview work — and then have no framework for understanding why nobody adopts. The support, sales, and opportunity-cost consequences of ten unvalidated products are much worse than the build cost was; the old economic filter is gone; only the discipline of upstream customer conversations remains.
Mom Test discipline is therefore not a nice-to-have. It is the load-bearing discipline in AI-era product work. The team that runs twenty high-quality Mom Test interviews a quarter outlearns the team that ships twenty polished prototypes in the same quarter — because the first team knows which assumption to kill, and the second team has to infer it from product-usage data that never arrives, because nobody wanted the product.
This threads with the AI-era lifecycle reframe and the whole early-stage validation cluster: AI collapsed build cost; sell cost is unchanged; customer conversations are the cheapest instrument for learning what will actually sell.
The PE / NED Diagnostic: Reading Interview Quality from Outside
From a board or diligence seat you can tell a lot about a team’s discovery maturity by asking to see their last ten customer interview notes (or just to sit in on one). Here’s what separates strong teams from weak ones.
Strong teams’ notes contain:
- Direct quotes from the customer in the customer’s language
- Specific past events with dates, amounts, and sequences
- References to what they’ve already tried and why it failed
- Commitment artefacts (“agreed to a one-hour follow-up with the CTO”, “pre-ordered for £500”)
- Recognised segment patterns across multiple interviews
- Clear separation between what the customer said, what the interviewer inferred, and what the team now hypothesises
Weak teams’ notes contain:
- Adjectives the customer used about the product (“excited”, “keen”, “loved it”)
- Feature requests listed as if they were roadmap items
- Hypothetical forecasts (“thinks they’d probably use it daily”)
- No commitment or advancement recorded
- The interviewer’s words significantly outnumbering the customer’s
- A concluding slide that’s effectively a demo script
A team whose interview notes are mostly compliments and feature requests is running a pitch loop, not a learning loop. A team whose notes are mostly artefacts and quotes is running a learning loop. The first will plateau; the second will find something. I’ve stopped taking the first kind of team seriously in early-stage reviews.
Mom Test and Problem-Solution Fit
The Mom Test is the operational technique that makes problem-solution fit work achievable. PSF requires evidence of a real, painful problem and a credibly-solving prototype. That evidence has to come from customer conversations, and the conversations have to be Mom-Test-grade or the evidence is theatre.
Operationally: run 20 Mom Test interviews with your target segment. Count how many of them name the specific problem unprompted. Count how many have paid for workarounds. Count how many would give you advancement (agreeing to a follow-up with a stakeholder, pre-ordering, piloting). If the count on each is low, you don’t have PSF — regardless of how encouraging the conversations felt. The interviewers who forget to measure these are the ones who end up claiming PSF three months later with nothing to show for it.
Interview Theatre and the Business Case Upstream
Even the best-run Mom Test interview produces worthless output if the underlying bet has no business case. I keep coming back to this point because it is the single most reliable tell that discovery work is theatre. A team conducting disciplined interviews in a market nobody has sized, for a customer segment nobody has defined, against a revenue line nobody has committed to deliver, is doing very good craft work on the wrong problem.
The specific pattern: a PM runs twenty clean Mom Test interviews, produces beautifully-synthesised notes, identifies five unmet needs — and cannot answer the board question “which of these unmet needs is worth more than £Xm in the next two years, to which segment, by when, and who will deliver it?”. The interviews were fine. The upstream rigour was missing. The organisation will act on those interviews with a new project, which will become another beautifully-researched bet with no committed outcome, and the cycle repeats until someone notices nothing shipped has actually produced revenue.
The fix isn’t more interviews. It’s a proper business case, sized and committed, before the interviews — with the interviews then testing the specific assumptions that business case depends on. Discovery in service of a commitment is how you sharpen the bet. Discovery in advance of any commitment is how you give the organisation the feeling of learning without the discipline of outcome ownership.
Products, Not Companies: Mom Test Per Product
An often-missed point: every new product a mature company launches needs its own Mom Test cycle. The fact that your existing product has customers doesn’t transfer to your new product’s prospects. A Series C SaaS company shipping its third product line still needs to interview prospects for that product as if the company were a pre-seed start-up, because the buying committee, the budget line, and the problem context are entirely different.
This is the pattern that kills line extensions. The existing product team assumes they understand the customer. They don’t — they understand the current customer for the current product. The new product’s target may be the same logo but a different department, a different budget holder, a different decision process. Fresh Mom Test work per product is the discipline that catches these differences; most organisations skip it because “we already know our customers”, and most line extensions fail.
The Side-of-Desk Anti-Pattern in Interview Work
Twenty high-quality Mom Test interviews is 30–40 hours of work, once you include scheduling, preparation, running the conversations, and synthesising notes. Teams that don’t have dedicated capacity for this work fit it in between delivery tasks and produce exactly the low-quality interview notes the previous section described. The interviewer is in a hurry, the synthesis is done at 10pm after the sprint retro, and the discipline degrades.
The pattern this blog has argued for repeatedly: a dedicated minimum viable team for each early-stage bet. Two engineers and a product person, with explicit discovery capacity in the first 90 days — an interview target (say 30 interviews in six weeks), clear synthesis artefacts, and WIP limits so nobody pulls them onto Run fires mid-conversation. Measure them on the quality of the discovery output, not the quantity of feature velocity. See outcome-based roadmaps for the measurement discipline and allocating discovery capacity for the allocation side.
Mom Test in the Cagan Operating Model
Marty Cagan’s empowered product team model presupposes teams that conduct their own discovery. The Mom Test is one of the practical tools that make empowerment safe — a team trusted to discover, but equipped with an interviewing discipline that produces decision-grade evidence, is the combination that generates good products. Empowerment without discovery discipline is worse than feature teams, because at least feature teams have someone else doing the research. Empowerment with Mom Test discipline is how product discovery becomes real.
Dual-track agile is the structural home: the discovery track is where interviews happen; the delivery track is where the evidence turns into product. Teams without a dual track run interviews when they have time — which is never.
The Bad-Salesperson Connection
A boss of mine used to say: “A bad salesperson will ALWAYS ask for more features. A good salesperson will sell what they have.” The Mom Test makes the same point from the discovery side. A bad interviewer will always come back with more features to build. A good interviewer will come back with a clearer understanding of the customer’s life, which often lets the team sell what they already have to a better-chosen segment.
When your discovery team consistently returns from interviews with long feature wish-lists rather than sharp pictures of customer behaviour, the interviewing is failing — and it’s usually failing at rule 1 (talking about the idea instead of the life) or rule 2 (asking hypothetical future-tense questions). Coaching the team on these two rules is often the highest-leverage improvement a product leader can make in a single afternoon.
How RoadmapOne Helps
RoadmapOne makes the discovery capacity visible. You can tag a squad’s objective as “run 20 Mom Test interviews in segment X and synthesise the top three unmet problems” — a real outcome with a real measurable (see OKRs for product teams and objectives to key results). The grid shows whether the team has protected capacity for that work; the RGT analytics tell the board how much of your capacity is actually on interview-driven learning.
Frequently Asked Questions
What is the Mom Test in customer interviews?
The Mom Test is a set of three rules — from Rob Fitzpatrick’s 2013 book — for running customer interviews that produce useful evidence rather than polite validation. Rule 1: talk about their life, not your idea. Rule 2: ask about specifics in the past, not hypotheticals about the future. Rule 3: talk less, listen more. The goal is interviews in which even your mother couldn’t deceive you — where a loving, polite respondent still ends up telling you the truth about their current problems and behaviour.
What are the three rules of the Mom Test?
(1) Talk about their life instead of your idea. (2) Ask about specifics in the past instead of generics or opinions about the future. (3) Talk less and listen more. Each rule counters a specific founder failure mode: pitching instead of listening, asking hypothetical questions that produce worthless forecasts, and dominating the conversation so the customer has no room to show you what their life is actually like.
Why are compliments bad data in customer interviews?
Because they are the default polite response, given regardless of whether the respondent would ever actually buy your product. A customer saying “that’s a great idea” tells you they’re polite, not that they’ll pay. Fitzpatrick calls compliments “shiny, distracting, and worthless”. The signal you want is commitment (time, reputation, money given up) or advancement (a concrete next step in a real funnel), not compliments. A meeting that ends with a compliment and no commitment failed, even if it felt like a success.
How many customer interviews do I need to run?
For a specific early-stage bet, plan for 20 interviews with target-segment customers. After 20 you’ll know whether the problem description is converging (good — you’ve found the segment) or diverging (bad — you’re talking to the wrong people). If it’s diverging, change who you’re talking to rather than running more interviews with the same kind of respondent. The problem-solution fit article covers this in more detail.
What’s the difference between commitment and advancement?
Commitment is the customer giving up something they value — time, reputation, or money. Tiered: a second call is mild commitment; introducing you to their boss is stronger; cutting a purchase order is decisive. Advancement is the conversation moving forward in a recognisable real-world funnel — discovery call → demo → stakeholder call → pilot. Good interviews produce one or both. Interviews that produce neither are conversations, not sales or discovery progress.
How does the Mom Test relate to other early-stage frameworks?
It’s the interviewing discipline that makes every upstream framework work. Problem-solution fit needs Mom-Test-grade interview evidence. Assumptions surfaced by assumption mapping get tested via Mom Test interviews, one of the cheapest Riskiest Assumption Test formats. Teresa Torres’s Opportunity Solution Tree bottom-layer experiments often include Mom Test interviews. The Mom Test isn’t an alternative to those frameworks; it’s the interviewing discipline that makes them produce real signal.
Conclusion
Rob Fitzpatrick’s The Mom Test is a book most product teams haven’t read and most product teams need to read. The three rules — talk about their life, ask about specifics in the past, talk less — are simple, easy to agree with, and very hard to follow. The teams that follow them consistently produce better products because they are genuinely learning; the teams that don’t produce a steady stream of unvalidated prototypes and can’t understand why nothing sticks.
In 2026 the gap has widened. Build cost has collapsed; interview quality has not. The single highest-leverage habit an early-stage product team can build in the AI era is Mom Test discipline — twenty honest conversations per quarter, commitment-and-advancement logged, synthesis shared with leadership, and the assumption map updated against what was learned. That pattern, repeated quarter after quarter, is the difference between teams that ship products people want and teams that don’t.
Buy the book. Read it this weekend. Run the first five interviews next week. You will be a better product team by Friday.