
Chapter 3: Pre-MVP Validation Techniques

Wizard of Oz, Concierge MVP, and Fake Door testing.

What You'll Learn

By the end of this chapter, you'll master three powerful pretotyping techniques—Wizard of Oz, Concierge MVP, and Fake Door testing—and know when to use each for maximum learning with minimum code.

The Most Efficient Code is Code You Never Write

Most founders skip straight to building. They assume the only way to test an idea is to create a working product. This is the most expensive mistake in entrepreneurship.

"Pretotyping" (a term coined by Alberto Savoia) involves simulating the core experience of a product to validate demand and usage patterns with near-zero engineering effort. The goal: answer the question "Should we build this?" before asking "Can we build this?"

The Core Insight

You can test whether customers will pay for your solution, use your solution, and love your solution—all without writing a single line of code. This chapter shows you how.

The Three Pretotyping Techniques

Each technique has its place. The key is matching the technique to the assumption you're testing:

Wizard of Oz

Best for: Testing if your solution works

Users think it's automated; humans do the work behind the scenes

Concierge

Best for: Deep problem discovery

Explicit high-touch service; learn exactly what customers need

Fake Door

Best for: Testing demand at scale

Measure real intent without building anything

Technique #1: The Wizard of Oz

Create a highly polished, functional-looking interface where humans secretly do the work that would eventually be automated. The user believes they're interacting with technology.

The Bug

"We need to build the AI before we can test if customers want it."

Building complex technology (AI, ML, automation) takes months. If customers don't want what it produces, those months are wasted.

The Fix

Use Wizard of Oz to fake the automation.

Build only the interface. Have humans do the "AI" work behind the scenes. If customers love the output, then build the real tech.

Case Study: Aardvark → Acquired by Google

Aardvark was a social search engine where users could ask questions and get answers from their network. The founders needed to validate that users would trust the service.

The Wizard of Oz approach: Instead of building complex routing algorithms, they employed interns to manually search for answers and route questions to the right experts. Users thought it was automated.

Result: They validated that users wanted the service before investing in the AI. Google acquired them for $50M.

When to Use Wizard of Oz

Perfect For

  • AI/ML-powered features
  • Recommendation engines
  • Automated workflows
  • Smart assistants
  • Personalization systems
  • Complex matching algorithms
  • Natural language processing
  • Predictive features

How to Run a Wizard of Oz Experiment

Step-by-Step Process

  1. Build the interface only. Create a functional-looking frontend with forms, buttons, and outputs—but no backend logic.
  2. Set up the "curtain." Create a system for humans to receive user inputs (email, Slack, dashboard) and provide outputs.
  3. Recruit 5-20 pilot users. Enough to learn, not enough to overwhelm your manual process.
  4. Operate manually. When users submit requests, humans fulfill them and make it look automated.
  5. Measure and learn. Track usage patterns, satisfaction, and—critically—willingness to pay.
  6. Decide: If validated, build the automation. If not, pivot before wasting engineering time.
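The "curtain" in step 2 can be as simple as a queue that holds user submissions until a human operator fulfills them. A minimal Python sketch of that idea (class, method, and field names are illustrative, not from any specific product):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class Request:
    """One user submission waiting behind the curtain."""
    user_id: str
    payload: str
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    fulfilled: bool = False
    output: Optional[str] = None


class Curtain:
    """Routes 'automated' requests to a human operator."""

    def __init__(self) -> None:
        self._queue: list[Request] = []

    def submit(self, user_id: str, payload: str) -> Request:
        # Called by the frontend; the user just sees "Processing..."
        req = Request(user_id, payload)
        self._queue.append(req)
        return req

    def pending(self) -> list[Request]:
        # What the operator's dashboard (or Slack channel) would show
        return [r for r in self._queue if not r.fulfilled]

    def fulfill(self, req: Request, output: str) -> None:
        # Operator writes the answer; the user receives it as if automated
        req.fulfilled = True
        req.output = output


# A user submits; a human fulfills; the queue empties.
curtain = Curtain()
req = curtain.submit("user-1", "Recommend a meal plan for this week")
assert len(curtain.pending()) == 1
curtain.fulfill(req, "Monday: lentil soup; Tuesday: tacos")
assert curtain.pending() == []
```

In practice the queue would live behind an email inbox, Slack channel, or admin dashboard rather than in memory, but the shape is the same: capture input, let a human produce the output, deliver it through the polished interface.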

Ethical Considerations

Be thoughtful about deception. Some argue users should be informed it's "powered by our team" during beta. Others argue the test is invalid if they know. At minimum, ensure the output quality matches what the automated version would deliver.

Technique #2: The Concierge MVP

Similar to Wizard of Oz in its manual execution, but with a critical difference: the human involvement is explicit. You deliver the value as a high-touch service, openly acknowledging that it's not yet automated.

The Bug

"We think we know what customers need. Let's build it."

You've never actually delivered the value you're planning to automate. You're guessing at the workflow, the pain points, the edge cases.

The Fix

Become the product yourself.

Deliver the value manually to 5-10 customers. Stand next to them. Watch them struggle. Learn exactly what they need before you automate anything.

Case Study: Food on the Table

Manuel Rosso founded a meal planning service. Instead of building an app, he started with one customer.

The Concierge approach: He visited the customer's home, reviewed her preferences, went to her local grocery store, and manually planned her meals for the week. He was the app.

What he learned: Exactly how customers make decisions, which constraints matter, and what features were essential. This informed every product decision. The company eventually raised $5M.

When to Use Concierge

Perfect For

  • Problem discovery: You're not sure exactly what customers need
  • Service businesses: The value is partly human expertise
  • Complex workflows: You need to understand edge cases before automating
  • B2B products: High-touch sales and delivery are expected
  • Premium positioning: "White glove" service can command higher prices

The Concierge Learning Framework

Every concierge interaction should generate learning. Use this framework:

After Each Customer Interaction

  • Friction Points: Where did they get confused or frustrated?
  • Delighters: What made them unexpectedly happy?
  • Edge Cases: What scenarios didn't we anticipate?
  • Feature Requests: What did they ask for that we don't offer?
  • Willingness to Pay: Did they pay? Would they pay more?
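Logging each session as structured data makes patterns visible across customers instead of living in scattered notes. A sketch, assuming you capture the five questions per interaction (type and field names are illustrative):

```python
from collections import Counter
from dataclasses import dataclass


@dataclass
class InteractionLog:
    """One concierge session, captured with the five-question framework."""
    customer: str
    friction_points: list[str]
    delighters: list[str]
    edge_cases: list[str]
    feature_requests: list[str]
    paid: bool


def top_feature_requests(logs: list[InteractionLog], n: int = 3):
    """Rank feature requests by how many distinct customers asked for them."""
    counts = Counter(req for log in logs for req in set(log.feature_requests))
    return counts.most_common(n)


# Hypothetical sessions from a meal-planning concierge
logs = [
    InteractionLog("Ana", ["signup form"], ["weekly email"], [],
                   ["export to PDF"], paid=True),
    InteractionLog("Ben", [], [], ["no nearby store"],
                   ["export to PDF", "dark mode"], paid=False),
]
print(top_feature_requests(logs))  # [('export to PDF', 2), ('dark mode', 1)]
```

The same counting trick works for friction points and edge cases; a request that recurs across several customers is a far stronger build signal than a single enthusiastic ask.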

Technique #3: The Fake Door (Smoke Test)

Create a marketing asset (landing page, ad, or button) for a feature that doesn't exist. When users click "Buy" or "Sign Up," show them a waitlist or "Coming Soon" message. This measures actual behavior—not what people say they'd do.

The Bug

"Everyone we talked to said they'd buy it."

People are polite. They say yes to hypothetical purchases. But stated intent ≠ actual behavior. You need to measure what they DO, not what they SAY.

The Fix

Run a Fake Door test to measure real intent.

Create a landing page with a "Buy Now" button. Run ads. See how many people actually click—and ideally, enter payment info before you reveal the waitlist.

How to Run a Fake Door Experiment

Step-by-Step Process

  1. Create a landing page that clearly describes your value proposition, pricing, and includes a prominent CTA button ("Buy Now," "Start Free Trial," etc.)
  2. Drive targeted traffic using paid ads ($200-500 is enough for initial signal) to your ideal customer segment
  3. On CTA click, show a waitlist page: "Thanks for your interest! We're currently in private beta. Enter your email to get early access."
  4. Measure conversion rates: Visitor → CTA Click, CTA Click → Email Submit
  5. For stronger signal: Take users to a checkout flow before revealing the waitlist (measures intent to pay, not just interest)
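The conversion rates in step 4 are simple ratios, but computing them explicitly avoids mixing up numerators. A sketch (function name and traffic numbers are illustrative):

```python
def funnel_rates(visitors: int, cta_clicks: int,
                 email_submits: int) -> dict[str, float]:
    """Conversion rates for the two fake-door funnel steps."""
    return {
        # Step 1: how many visitors clicked the CTA
        "visitor_to_cta": cta_clicks / visitors if visitors else 0.0,
        # Step 2: how many clickers left an email on the waitlist page
        "cta_to_email": email_submits / cta_clicks if cta_clicks else 0.0,
    }


rates = funnel_rates(visitors=1000, cta_clicks=60, email_submits=27)
print(rates)  # {'visitor_to_cta': 0.06, 'cta_to_email': 0.45}
```

Note that the two rates answer different questions: visitor-to-CTA measures how compelling the pitch is, while CTA-to-email measures how committed the interested visitors are.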

Fake Door Benchmark Thresholds

What "Good" Looks Like

Metric                      Weak Signal   Moderate Signal   Strong Signal
Landing → CTA Click         <2%           2-5%              >5%
CTA → Email Submit          <20%          20-40%            >40%
CTA → Credit Card Entry     <1%           1-3%              >3%

Note: Benchmarks vary by industry and price point. B2B typically has lower volume but higher intent.
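Encoding the table's cutoffs once keeps classification consistent across experiments. A sketch using the thresholds above (the metric keys are made up for illustration, and you should substitute your own industry benchmarks where they differ):

```python
# (weak upper bound, strong lower bound) per metric, from the table above
THRESHOLDS = {
    "landing_to_cta": (0.02, 0.05),
    "cta_to_email": (0.20, 0.40),
    "cta_to_card": (0.01, 0.03),
}


def classify(metric: str, rate: float) -> str:
    """Label a conversion rate as a weak, moderate, or strong signal."""
    weak_below, strong_above = THRESHOLDS[metric]
    if rate < weak_below:
        return "weak"
    if rate > strong_above:
        return "strong"
    return "moderate"


print(classify("landing_to_cta", 0.06))  # strong
print(classify("cta_to_email", 0.25))    # moderate
print(classify("cta_to_card", 0.005))    # weak
```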

The Limitation of Fake Door

Fake Door tests measure interest, not utility or retention. They tell you if people want to buy—not if they'll keep using. Use Fake Door for demand validation, then follow up with Wizard of Oz or Concierge to validate the experience.

Choosing the Right Technique

Match your pretotyping technique to the assumption you need to test:

What You're Testing              Best Technique     Why
"Is there demand for this?"      Fake Door          Measures actual purchase intent at scale
"What price will they pay?"      Fake Door + A/B    Test different price points on landing pages
"Will the solution work?"        Wizard of Oz       Test the full experience without building a backend
"What do they actually need?"    Concierge          Deep discovery through high-touch service
"Will they keep using it?"       Wizard of Oz       Measure retention over multiple interactions
"What features matter?"          Concierge          Learn from manual delivery before automating

The Pretotyping Decision Tree

Quick Decision Guide

  1. Start with Fake Door to validate basic demand exists
  2. If demand is validated → Run Concierge with 5-10 customers to deeply understand needs
  3. Once you understand needs → Run Wizard of Oz to test the specific solution at scale
  4. If all three pass → NOW you build with confidence
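The decision guide above can be expressed as a small function that returns the next experiment to run, given what has been validated so far. A sketch; the messages for failed gates are assumptions in the spirit of "pivot before wasting engineering time":

```python
from typing import Optional


def next_step(demand_validated: Optional[bool] = None,
              needs_understood: Optional[bool] = None,
              solution_validated: Optional[bool] = None) -> str:
    """Walk the pretotyping decision tree one gate at a time.

    None means the gate has not been tested yet.
    """
    if demand_validated is None:
        return "Run a Fake Door test to validate demand"
    if not demand_validated:
        return "Pivot: no demand signal"
    if needs_understood is None:
        return "Run Concierge with 5-10 customers"
    if not needs_understood:
        return "Keep delivering manually until needs are clear"
    if solution_validated is None:
        return "Run Wizard of Oz to test the solution at scale"
    if not solution_validated:
        return "Pivot the solution before writing automation code"
    return "Build with confidence"


print(next_step())                  # Run a Fake Door test to validate demand
print(next_step(True, True, True))  # Build with confidence
```

The point of the encoding is the ordering: each gate is cheaper than the one after it, so a failure early in the sequence saves the cost of every later experiment.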

Key Takeaways

Remember These Truths
  1. The best code is code you never write. Pretotyping lets you validate before engineering.
  2. Fake Door tests interest. Use it to measure demand before building anything.
  3. Wizard of Oz tests the solution. Fake the automation to validate the experience.
  4. Concierge drives discovery. Become the product to learn what to build.
  5. Match technique to assumption. Different questions require different tests.

Now that you can validate demand without building, let's explore how to prioritize features and make build-vs-buy decisions when you do start developing.

Works Cited & Recommended Reading
RAT vs MVP Philosophy
  • 1. Ries, E. (2011). The Lean Startup. Crown Business.
  • 2. "Why RAT (Riskiest Assumption Test) beats MVP every time." LinkedIn
  • 3. "Pretotyping: The Art of Innovation." Pretotyping.org
  • 6. "Continuous Discovery: Product Trio." Product Talk
  • 7. "MVP Fidelity Spectrum Guide." SVPG
Minimum Lovable Product
  • 8. Olsen, D. (2015). The Lean Product Playbook. Wiley.
  • 9. "From MVP to MLP: Why 'Viable' Is No Longer Enough." First Round Review
  • 10. "Minimum Lovable Product framework." Amplitude Blog
Assumption Mapping
  • 15. Bland, D. & Osterwalder, A. (2019). Testing Business Ideas. Wiley.
  • 16. "Risk vs. Knowledge Matrix." Miro Templates
  • 17. "Identifying Riskiest Assumptions." Intercom Blog
User Story & Impact Mapping
  • 20. Patton, J. (2014). User Story Mapping. O'Reilly Media.
  • 21. Adzic, G. (2012). Impact Mapping. Provoking Thoughts.
  • 22. "Jobs-to-Be-Done Story Framework." JTBD.info
  • 23. "The INVEST Criteria for User Stories." Agile Alliance
  • 24. "North Star Metric Framework." Amplitude
  • 25. "Opportunity Solution Trees." Product Talk
  • 26. Torres, T. (2021). Continuous Discovery Habits. Product Talk LLC.

This playbook synthesizes methodologies from Lean Startup, Design Thinking, Jobs-to-Be-Done, Pretotyping, and modern product management practices. References are provided for deeper exploration of each topic.