
Assumption Mapping: David Bland's 2×2 for Deciding What to Test First


Every early-stage product is a stack of assumptions. Some are load-bearing — if they’re wrong the whole thing collapses. Some are incidental — they’d be nice to confirm but nothing rests on them. Teams that can’t tell the two apart spend their discovery capacity testing the easy, unimportant assumptions (because those tests are fun to design) and never get around to the ones that would kill the idea if they failed.

Assumption mapping is the workshop technique that sorts assumptions by importance and evidence. Plot every assumption on a 2×2: importance on one axis, evidence on the other. The top-right quadrant — important, no evidence — is where the leap-of-faith assumptions live, and those are the ones that get tested first with a Riskiest Assumption Test. Everything else waits, or is accepted as a known risk.

The technique is simple. The discipline is rare. Most teams skip it entirely and test whatever assumption is most convenient to test — which is almost never the one most likely to kill the idea.

Assumption mapping is a workshop technique, canonised by David Bland and Alex Osterwalder in Testing Business Ideas (Strategyzer, 2019), for identifying the leap-of-faith assumptions in a product or business hypothesis and sequencing experiments accordingly. Teams plot assumptions on a 2×2 grid — important vs unimportant on one axis, known vs unknown (or evidence vs no evidence) on the other. The top-right quadrant (important + unknown) contains the assumptions that should be tested first. Bland categorises hypothesis types as Desirability, Feasibility, Viability, and Adaptability.

My Personal Experience

TL;DR: An assumption map is only worth the paper it’s drawn on if the assumptions underneath it are tied to a committed outcome somebody has signed up to deliver. I have seen beautifully facilitated assumption-mapping workshops produce elegant 2×2s, high-quality sticky notes, and dot-voted priorities — all on behalf of bets that had no business case, no sized market, no TAM/SAM analysis by phase, and no committed revenue line in the budget. That is the single cleanest tell that the discovery work is theatre. You cannot prioritise assumptions by importance if there is no outcome whose importance you are measuring them against.

The teams that are any good at this almost always run an assumption map at the start of every new bet and have a business case behind it. The teams that aren’t will explain at length why they don’t need either — usually while the bet quietly dies of an assumption nobody ever tested. A static assumption map that hasn’t been updated in 90 days is as bad as none. A live one, dated, updating quarter-to-quarter, tied to a revenue commitment someone will stand behind, is one of the cleanest signals of a healthy discovery operating model I know.

What Assumption Mapping Actually Is

David Bland and Alex Osterwalder’s Testing Business Ideas (Wiley/Strategyzer, 2019) is the canonical text. Its antecedents — Jeff Gothelf and Josh Seiden’s Lean UX (2013), Strategyzer’s Business Model Canvas work, and the Lean Startup lineage — give the technique academic provenance, but Bland’s book is where assumption mapping as a practical workshop format lives.

The workshop is compact. Two to four hours. Cross-functional attendance — product, engineering, design, and ideally commercial. The sequence:

  1. State the hypothesis clearly. One sentence that the team is actually trying to test. “Small B2B SaaS companies will pay £99/month to coordinate roadmap capacity across multiple squads.”
  2. Individually silent-generate assumptions. Sticky notes, five to ten minutes, no discussion. You’ll get more candidates than in a group brainstorm because quiet voices don’t get steam-rolled.
  3. Cluster into hypothesis types. Bland’s current formulation is four: Desirability (do people want this?), Feasibility (can we build it?), Viability (does the business work?), and Adaptability (can we respond to change as we scale?). Adaptability is a relatively recent addition and still missing from most online primers — but it matters, especially for platform bets.
  4. Plot on the 2×2. Vertical axis: important (top) → unimportant (bottom). Horizontal axis: unknown / no evidence (right) → known / have evidence (left). The top-right quadrant is where the leap-of-faith assumptions live. Every team’s first instinct is to argue assumptions into the left column (“we kind of know this”); resist that — honest “no evidence” is the signal.
  5. Dot-vote inside the top-right quadrant. The team picks the top three to five to test first.
  6. Commit to experiments. For each top-right assumption, define the Riskiest Assumption Test that would validate or invalidate it cheapest, and set the kill criterion in advance.

The map is a living document. Update it after every experiment. Assumptions move left (as evidence accrues) or get killed (evidence shows them false, which kills or pivots the idea). New assumptions appear as the product evolves. A dated, versioned assumption map is one of the best artefacts a discovery team can maintain.
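To make the mechanics concrete, here is a minimal sketch of the plot-and-sequence step in Python. The data model, the 1–5 scoring scales, and the cutoffs are assumptions of this sketch — in a real workshop the quadrant placement is a team judgement on sticky notes, not a score — but the logic is the same: filter to important-and-unknown, then sequence by what kills the idea fastest.

```python
from dataclasses import dataclass
from enum import Enum

class HypothesisType(Enum):
    DESIRABILITY = "desirability"
    FEASIBILITY = "feasibility"
    VIABILITY = "viability"
    ADAPTABILITY = "adaptability"

@dataclass
class Assumption:
    text: str
    hypothesis_type: HypothesisType
    importance: int  # 1 (incidental) .. 5 (load-bearing)
    evidence: int    # 1 (no evidence) .. 5 (strong evidence)

def leap_of_faith(assumptions, importance_cutoff=4, evidence_cutoff=2):
    """Top-right quadrant: important AND unknown. Test these first,
    most important / least evidenced at the head of the queue."""
    return sorted(
        (a for a in assumptions
         if a.importance >= importance_cutoff and a.evidence <= evidence_cutoff),
        key=lambda a: (-a.importance, a.evidence),
    )

# Hypothetical map for the £99/month coordination-tool hypothesis above.
backlog = [
    Assumption("SMBs will pay £99/month", HypothesisType.VIABILITY, 5, 1),
    Assumption("We can build multi-squad views", HypothesisType.FEASIBILITY, 3, 4),
    Assumption("PMs feel cross-squad planning pain", HypothesisType.DESIRABILITY, 5, 2),
]

for a in leap_of_faith(backlog):
    print(f"TEST FIRST: [{a.hypothesis_type.value}] {a.text}")
```

Note that the feasibility assumption drops out: it is neither important enough nor unknown enough to spend experiment capacity on, which is exactly the sequencing discipline the workshop enforces.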

The Four Hypothesis Types (Desirability, Feasibility, Viability, Adaptability)

Bland’s four categories map neatly onto Cagan’s four product risks, with one additional dimension worth calling out.

| Bland category | Cagan risk | Question tested |
| --- | --- | --- |
| Desirability | Value risk | Do enough of the right people want this? |
| Feasibility | Feasibility risk | Can we build it (scale, reliability, cost)? |
| Viability | Viability risk | Does the business case work (CAC/LTV, channel-model, regulation)? |
| Adaptability | — (Bland’s addition) | Can we evolve the product/org without breaking it as the market moves? |

In 2026, feasibility risk has dropped sharply — AI tooling has made most software feasibility questions answerable in a week. Desirability and viability assumptions now dominate the top-right quadrant. Adaptability matters mostly for platform and marketplace bets where you’re locking in design choices that are expensive to reverse.

Teresa Torres’s treatment in Continuous Discovery Habits uses a slightly different split — Desirability, Viability, Feasibility, Usability, and Ethical — with usability and ethical often pulled out from the broader desirability bucket. Either taxonomy works; the point is to stop the team lumping everything into one pile so they see they’re ignoring specific risk categories.

The 2026 Reframe: Assumption Mapping Is Now the Dominant Cost

This is the AI-era argument that threads through the whole early-stage validation cluster. Pre-AI, the dominant cost in early-stage product work was building. Build cost acted as a crude filter — teams that couldn’t afford to build the wrong thing often didn’t build at all, which was itself a form of validation.

In 2026 building is effectively free. That means the cost of early-stage product work has shifted almost entirely onto the validation side. Customer interviews, RATs, assumption testing — these are now the dominant cost. And because they’re the dominant cost, they’re also the dominant lever. Teams that run disciplined assumption mapping reduce total cost of early-stage work by killing bad bets before the (admittedly small) build cost is incurred and before the (much larger) sales, support, and opportunity-cost consequences start accruing.

The assumption map is also your cheapest insurance against the AI-era failure mode of shipping ten plausible products in a quarter with no meaningful signal on any of them. An assumption map makes the “we’re not actually testing anything” state visible in a way that feature ticks on a burndown never will.

Assumption Mapping and the Opportunity Solution Tree

Teresa Torres’s Opportunity Solution Tree is a complementary framework, not a competing one. The tree flows top-down: Outcome → Opportunities → Solutions → Assumption Tests. The assumption map sits at the bottom layer; the test cards for any given leaf solution come straight from the assumptions plotted as important-and-unknown for that solution.

The two-framework pattern I recommend: use the OST for the strategic shape of discovery (which opportunities, which solutions), and use assumption mapping as the tactical workshop that sequences the experiments within each solution branch. Torres’s compass is the OST overall — the team’s navigational aid for continuous discovery. The assumption map is the navigational aid for a specific bet within that tree.

Running an assumption mapping workshop without a broader discovery strategy — no OST, no outcome, no opportunity — risks ending up with beautifully mapped assumptions for an idea that shouldn’t be on the roadmap at all. Start with the outcome; derive the opportunities; pick a solution; then assumption-map.

Products, Not Companies: Portfolio Assumption Maps

A mature company with a portfolio of products doesn’t have one assumption map. It has one per product in discovery. The board or exec team benefits from a portfolio view of assumption maps — one map per Transform bet, laid side by side, updated quarterly.

This surfaces a question most organisations never ask explicitly: across our early-stage bets, are we testing the assumptions that actually matter, and are we testing them in parallel across bets rather than sequentially? An organisation with five Transform bets each waiting to run one assumption test at a time is moving glacially. Three bets each with two or three experiments in parallel is a healthier pattern — and if the organisation can’t support that level of parallel experimentation, the question becomes should we have three bets live at all, or should we pick two and resource them properly?

See the Three Horizons and Run / Grow / Transform frames for the portfolio-allocation conversation the assumption map opens.

The Business Case Precondition: Importance Is Defined by the Committed Outcome

You cannot meaningfully place assumptions on the importance axis of the 2×2 without a clearly defined outcome. Importance is always importance to something — and in early-stage product work that something is the revenue line, customer-segment win, or market-entry target that the bet is supposed to deliver.

This is why the single highest-leverage intervention before running an assumption-mapping workshop is making sure there’s a proper business case behind the bet: market sized, customers segmented, TAM and SAM by phase, unit economics modelled, revenue line committed in the budget, and a named person who has signed up to deliver against that line. With that in place, the importance axis has real meaning — an assumption’s importance is measured against what it does to the committed outcome. Without it, every assumption looks equally important because there’s nothing to measure importance against. That’s why theatre assumption maps end up with twenty notes in the top-right quadrant and no way to sequence the experiments that follow.

Discovery in service of a committed outcome is rigorous work. Discovery without a committed outcome is an expensive form of ideation. The assumption map surfaces the difference — because a theatre map can’t survive even five minutes of “what outcome is this assumption’s importance measured against?” from the board.

The Side-of-Desk Anti-Pattern

You cannot run meaningful assumption mapping on 10% of a PM’s time and “whenever the team has spare capacity”. The workshop itself is cheap — two to four hours — but the follow-up is not. Each top-right assumption produces an experiment that takes one to three weeks to run. If the team responsible for running those experiments also has a delivery commitment, the delivery work wins every time and the assumptions sit unvalidated indefinitely.

The familiar fix: a dedicated minimum viable team for any Transform bet worth pursuing. Two engineers and a product person, full-time, with a proper business case. The first 90 days of their backlog is an assumption map and a sequence of experiments, not a feature list. Measure them on killed-or-validated assumptions, not shipped scope — see outcome-based roadmaps and outcome vs output vs input for the measurement discipline, and WIP limits to protect the team from interrupt work.

If the organisation can’t afford that commitment for a bet, the honest answer is to kill the bet — not to limp along with side-of-desk assumption work that never concludes anything. The middle option is the most expensive one. See priority whiplash for why the pattern is so common and so corrosive.

The PE / NED Diagnostic: Reading an Assumption Map from the Board Seat

When a portfolio company brings an early-stage bet to the board, the assumption map is one of the cleanest artefacts to interrogate. Here’s what I look for:

  1. Is there an assumption map at all? A surprising number of teams don’t have one. That alone tells you the discovery operating model isn’t mature.
  2. Is it dated, and when was it last updated? Static maps mean stopped learning. Look for version history.
  3. What’s in the top-right quadrant? If everything is in the left column — “we kind of know all this” — the team is overclaiming evidence. If nothing’s in the top-right, they’re being dishonest with themselves.
  4. Are desirability and viability assumptions represented? In 2026, most risk is concentrated there. If the top-right is all feasibility assumptions, the team is testing what’s comfortable rather than what’s scary.
  5. What’s been tested since last quarter? Two to four experiments per quarter is a healthy cadence for a dedicated team. Zero is a problem. Ten is usually theatre — tests too shallow to produce real evidence.
  6. What’s moved? Assumptions should have migrated from top-right to left-centre (validated) or been marked killed (invalidated). An immobile map is a non-learning team.
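The “what’s moved” check is mechanical enough to sketch: diff two dated snapshots of the map. The snapshot shape, the (importance, evidence) scoring, and the example assumption names below are illustrative assumptions of this sketch, not anything from Bland’s book, but the diagnostic is the one described above — an immobile map is a non-learning team.

```python
def map_movement(last_quarter, this_quarter):
    """Each snapshot is {assumption: (importance, evidence)} on 1-5 scales.
    Returns which assumptions moved, were killed, appeared, or sat static."""
    moved = [n for n, pos in this_quarter.items()
             if n in last_quarter and last_quarter[n] != pos]
    static = [n for n, pos in this_quarter.items()
              if n in last_quarter and last_quarter[n] == pos]
    new = [n for n in this_quarter if n not in last_quarter]
    killed = [n for n in last_quarter if n not in this_quarter]
    return {"moved": moved, "killed": killed, "new": new, "static": static}

# Hypothetical Q1 and Q2 snapshots of the same map.
q1 = {
    "SMBs will pay £99/month": (5, 1),
    "PMs feel cross-squad pain": (5, 2),
    "Churn stays under 3%": (4, 1),
}
q2 = {
    "SMBs will pay £99/month": (5, 4),    # evidence accrued: moved left
    "PMs feel cross-squad pain": (5, 2),  # untested since Q1: static
}  # "Churn stays under 3%" absent: invalidated and killed

print(map_movement(q1, q2))
```

A board pack that includes this diff each quarter answers questions 5 and 6 above in one line: an empty `moved` and `killed` list for two quarters running is the immobile-map signal.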

The best conversation I ever had as a NED with a portfolio CEO went: “Tell me which three assumptions you’re testing this quarter and what would have to happen to kill each one.” If they can answer crisply, the bet is real. If they can’t, the bet is theatre dressed up in product vocabulary.

Assumption Mapping and the Product Operating Model

Assumption mapping is structurally a Cagan / SVPG practice. It presupposes an empowered team that’s trusted to identify and run its own experiments — not a feature team executing someone else’s wish list. Without the product operating model it becomes theatre: the team runs the workshop, produces a beautiful map, and then is told which features to ship regardless.

Dual-track agile is the tactical home for assumption mapping. The discovery track is where the map lives; the experiments run in that track; validated solutions then feed the delivery track. Teams without a dual track cannot maintain the discipline — discovery and delivery on the same track turns into all-delivery every single time.

See the product discovery cluster for deeper treatment, particularly measuring discovery success (the assumption map is one of the three or four artefacts you’d expect to inspect there) and leading discovery activities.

How RoadmapOne Helps

RoadmapOne makes the capacity side of assumption mapping visible. You can tag a Transform squad’s objective as “run the assumption map for the [new product] bet” — an outcome, not an output (see objectives to key results). The grid shows you whether the team is actually protected for that work or whether they’ve been pulled onto Run fires, which is the single biggest failure mode for assumption-mapping discipline. The Run / Grow / Transform analytics tell the board what percentage of capacity is on discovery-stage assumption work — usually much less than anyone thinks.

Frequently Asked Questions

What is assumption mapping?

Assumption mapping is a workshop technique, canonised by David Bland and Alex Osterwalder in Testing Business Ideas (2019), for identifying the leap-of-faith assumptions in a product idea and sequencing experiments accordingly. Teams plot assumptions on a 2×2 grid — importance on one axis, evidence (known/unknown) on the other. The top-right quadrant (important and unknown) contains the assumptions that should be tested first, using the cheapest possible experiment — typically a Riskiest Assumption Test.

What are the axes of the assumption mapping 2×2?

Vertical: important (top) → unimportant (bottom) — how load-bearing this assumption is for the idea. Horizontal: unknown / no evidence (right) → known / evidence in hand (left) — how much validated evidence you currently have. The top-right quadrant (important + unknown) is where the “leap of faith” assumptions live. Teams that honestly populate this quadrant then sequence their experiments by which leap-of-faith assumption, if wrong, kills the idea fastest.

How is assumption mapping different from the Opportunity Solution Tree?

Teresa Torres’s Opportunity Solution Tree is the strategic shape of continuous discovery — Outcome → Opportunities → Solutions → Assumption Tests. The assumption map is the tactical workshop that sequences experiments within any given solution branch. Use the OST for what’s worth exploring; use the assumption map for which assumption to test first once you’ve picked a solution to explore. They’re complementary — neither replaces the other. Most mature discovery operating models use both.

How often should we run an assumption mapping workshop?

For a new Transform bet, at the start. Then re-visit and update the map after every significant experiment — probably every two to four weeks during active discovery. A quarterly refresh is the minimum cadence for a map to stay credible. Static maps that haven’t been updated in three months are worse than having no map at all, because they give leadership false confidence that discovery is happening when it isn’t.

Who should attend an assumption mapping workshop?

Cross-functional: product, engineering, design, and ideally commercial (sales or finance). Four to eight people is the sweet spot — enough diversity of perspective to surface assumptions the PM would miss alone, few enough that individual voices still land. Customer research or data team presence is a strong bonus if you have them; they’ll challenge assumptions the rest of the team treats as self-evident. The workshop itself is two to four hours.

What are the four hypothesis types in assumption mapping?

David Bland’s current formulation: Desirability (do people want this?), Feasibility (can we build it?), Viability (does the business case work?), and Adaptability (can we evolve the product/org without breaking it?). The first three map directly onto Marty Cagan’s four product risks; Adaptability is Bland’s addition and matters most for platform and marketplace bets where early architectural choices are expensive to reverse later.

Conclusion

Assumption mapping is one of those small, unglamorous disciplines that separates teams that find good products from teams that don’t. The workshop itself is short — two to four hours — but the habit of updating the map as evidence accrues is the discipline that compounds. Most teams never build that habit. The ones that do are the ones that ship the right products faster, because they’ve killed the wrong ones cheaper.

In 2026, when building is nearly free and validation is the dominant cost curve, the assumption map is the single highest-leverage artefact a discovery team produces. It tells the board what’s being learned, tells the team what to test next, and tells leadership where to allocate scarce experimentation capacity. Run the workshop; pin the 2×2 on a wall (virtual or otherwise); update it weekly; inspect it quarterly at the board. Do these four things and you’ll spend your discovery capacity on the experiments that actually matter, rather than the ones that happen to be convenient.

