Workshop 2 of 7

Rapid Prototyping &
Assumption Testing

Turn your best idea into something testable — in hours, not weeks. Find out what you don't know before you build the wrong thing.

⏱ 3 hours
👥 Teams of 2–5
🔧 Build · Test · Learn
🤖 AI simulation included
📍 Continues from Workshop 1
IDEO Design Thinking · Google Design Sprint · Y Combinator Build–Measure–Learn
⏱ Phase 0
0:00–0:20 · 20 min
Phase 0 · Warm-Up

From Problem to Prototype Thinking 🎯

Three frameworks. Three minutes each. The fastest way to shift your brain from thinking about ideas to making something real.

Mindset — Three Schools of Prototyping

Why Fast Beats Perfect

Every great product company shares one habit: they stop talking about ideas and start making them tangible, fast. Not because they have more resources, but because a rough prototype teaches you more in 30 minutes than a week of discussion.

IDEO Design Thinking

"Build to Think"

  • Prototype to learn, not to present
  • "Fail often to succeed sooner"
  • Empathy first — watch real people
  • Low-fidelity beats no prototype
  • Bodystorming: act it out physically
Google Design Sprint

"Compress Time"

  • Map → Sketch → Decide → Prototype → Test
  • Realistic prototype in one day
  • 5-person test reveals 85% of issues
  • Note-and-vote decisions
  • Lightning demos for inspiration
Y Combinator

"Talk to Users"

  • "Do things that don't scale"
  • 10 conversations > 1 survey
  • Concierge MVP: do it by hand first
  • Launch fast, iterate always
  • "What would have to be true?"
Warm-Up Tool — Lightning Demos Google Sprint

3 Minutes: Steal Like an Artist

Before building, get inspired by what already exists. Each team member spends 3 minutes finding one product, service, or solution anywhere in the world that tackles a similar problem — any industry.

Share in 30 seconds each: "This is [X]. What I want to steal from it is [Y]." Write stolen ideas on sticky notes. These become ingredients for your prototype.

"Good artists copy. Great artists steal." — attributed to Picasso, popularised by Steve Jobs
Activity · 10 min — Pick Your Idea

Revisit Your Wanted Poster — Choose One Idea to Test

1

Take out your Wanted Poster from Workshop 1. Read the root cause and problem statement aloud as a team.

2

Each member shares one idea from Crazy 8s in 60 seconds — no debate yet, just share.

3

Dot-vote (2 dots each). Pick the idea with the most votes. If tied, pick the one where you are most uncertain — that's the one most worth testing today.

4

Write your belief statement: "We believe that [IDEA] will help [WHO] solve [PROBLEM]."

⏱ Phase 1
0:20–0:55 · 35 min
🗂
Phase 1 · Assume

Surface Your Assumptions 🗺

Every idea is a stack of unproven beliefs. Get them on the table and find the one that would kill your idea if it turns out to be wrong.

Tool 1 — Assumption Mapping IDEO YC

Brain-dump Every Assumption (15 min)

On Print 01, write every belief your idea depends on. One assumption per sticky note.

Category | The question to ask | Example
Customer | Does this problem actually exist for them? | "Students find the canteen queue frustrating enough to act"
Behaviour | Will they change what they currently do? | "They'll check an app before walking to lunch"
Value | Is our solution clearly better than the alternative? | "Real-time queue data is worth downloading an app for"
Willingness to pay | Would they pay, subscribe, or consistently use this? | "Canteen operators would pay €50/month for this data"
Access | Can we actually reach these people? | "We can get canteen staff to install sensors"
Technical | Can this actually be built? | "Queue length can be estimated from camera footage"
Tool 2 — Risk Matrix Google Sprint

Find the Riskiest Assumption (10 min)

Draw a 2×2 on Print 01. Plot each assumption on two axes. The top-left quadrant (high importance + low certainty) is your Critical Assumption. Circle it in red — your prototype must test this today.

1

Y-axis: Importance — If wrong, does the whole idea collapse?

2

X-axis: Certainty — Real evidence, or just a hunch?

3

Top-left = Critical Assumption. Circle it. That's what you're testing today.
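If the sticky-note wall gets crowded, the same 2×2 sort can be done numerically. A minimal Python sketch (the assumptions and the 1–5 scores below are invented examples, not workshop data): score each assumption on importance and certainty, then rank by the gap between them so the high-importance, low-certainty item surfaces first.

```python
# Rank assumptions so the riskiest (high importance, low certainty) comes first.
# Both the assumptions and the 1-5 scores are made-up examples.
assumptions = [
    {"belief": "Students find the queue frustrating enough to act", "importance": 5, "certainty": 2},
    {"belief": "They'll check an app before walking to lunch", "importance": 5, "certainty": 1},
    {"belief": "Queue length can be estimated from camera footage", "importance": 3, "certainty": 4},
]

# Risk = importance minus certainty; the top item is your critical assumption.
ranked = sorted(assumptions, key=lambda a: a["importance"] - a["certainty"], reverse=True)

for a in ranked:
    print(f"risk={a['importance'] - a['certainty']:+d}  {a['belief']}")
```

The numbers only make the conversation concrete; the team argument over why a score is a 1 or a 5 is where the real value is.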

Tool 3 — "How Might We" Reframing IDEO

Turn the Problem into a Design Challenge (10 min)

Popularised by IDEO and used in every Google Design Sprint. Take your critical assumption and reframe it as an open question — an invitation to solve rather than a problem to complain about.

Formula: How Might We [verb] [user's need] so that [positive outcome]?

Example: Campus Food App
"HMW help students make faster lunch decisions without needing real-time data?"
"HMW make the lunch experience feel less like a gamble?"
"HMW help canteens signal their best moments to students proactively?"

Write 3 HMW questions on Print 02. Vote on the one most exciting to solve. This is your design challenge for the prototype sprint.

💡 YC Framework

"What Would Have to Be True?"

Y Combinator partners use this question with every startup pitch: "For your idea to work, what would have to be true about the world?" If your answers sound unlikely — you've found your critical assumption.

⏱ Phase 2
0:55–1:30 · 35 min
Phase 2 · Prototype

Build the Minimum Testable Thing 🔧

Choose a technique from IDEO, Google or YC. Build the cheapest thing that can test your critical assumption. Ugly is fine. Done is mandatory.

Core Principle

The Prototype Is a Question Made Tangible

You are not building a product. You are building the minimum thing needed to get a genuine reaction from a real person. The moment someone interacts with it, you learn something no conversation can give you.

"If a picture is worth a thousand words, a prototype is worth a thousand meetings." — IDEO
Step 1 — Choose Your Technique (5 min)

7 Prototype Formats — Pick One

📄
Paper Prototype
IDEO
Sketch screens, service flows, or a product layout by hand. Show it as-is — no explanation needed.
⏱ 15–20 min to build
🎭
Role-Play / Bodystorm
IDEO
Act out the experience. One person plays the "system," one plays the user. No materials needed β€” just space and courage.
⏱ 10 min to prepare
🧙
Wizard of Oz
IDEO
Fake a technology with humans behind the scenes. The user thinks it's automated — it isn't yet. Classic for AI/app concepts.
⏱ 10 min to set up
🖼
6-Panel Storyboard
Google Sprint
Comic-strip the user journey across 6 panels: problem → discovery → use → result. Use Print 04.
⏱ 15–20 min to draw
📋
Fake Landing Page
Google Sprint / YC
One page: what it is, who it's for, what it does, one call-to-action. Write the copy only — no code. Use AI to generate it fast.
⏱ 15 min (with AI)
📬
Concierge MVP
Y Combinator
Deliver the service manually for one person. No technology — just do it by hand. Prove the value exists before automating anything.
⏱ Do it live today
📊
Survey / Sign-up Form
Digital
A Google Form or Typeform as your first "product." One question that tests intent: "Would you use this? Leave your email." Measure real interest.
⏱ 10 min to create
Step 2 — Build Sprint (30 min)

Build Focused on Your Critical Assumption

Use Print 03 — Prototype Plan to plan before building:

1

Write your critical assumption at the top. Everything you build must test this — nothing else.

2

Find the one moment in your idea where the user either "gets it" or doesn't. Build that moment first.

3

If you spend more than 5 minutes on any element — simplify it. Ugly is fine. Missing features are fine. Testable is mandatory.

4

Before finishing, write exactly 3 questions to ask real people. Open-ended: "What do you think this is?" not leading: "Do you like it?"

⚠️ The Three Prototype Traps

Common Mistakes

Building too much: You're testing one assumption. Don't add features. More fidelity doesn't mean more insight — it means more wasted time.

Pitching instead of showing: Put the prototype in front of someone and stay quiet. The moment you explain it, you've invalidated the test — you want their natural reaction.

Testing on friends who want to be nice: a classic "Mom Test" failure (after Rob Fitzpatrick's book, a YC favourite). Find people who match your target user and have no reason to be polite about your idea.

⏱ AI Simulation
1:30–1:55 · 25 min
AI Simulation Phase · Before Field Testing

Test with AI Before Testing with Humans 🤖

Run a rapid digital simulation before your field sprint. Get your first round of feedback in minutes — then go outside sharper and better prepared. Choose 1–2 activities based on time.

🤖 AI Activity 1 — User Interview Simulation Claude / ChatGPT

Roleplay Your Target User

Before testing with real people, rehearse with AI. Give it a precise persona and ask it to react to your prototype as a real user would. You'll find obvious holes before going outside.

Copy → Paste into Claude or ChatGPT

User Interview Simulation Prompt

You are [TARGET USER — e.g. "a 21-year-old undergrad student at a busy urban university who eats lunch on campus 4 days a week"]. You have no knowledge of any startup solving this problem. You've agreed to talk to a student team for 3 minutes. The team will show you their prototype. React honestly and naturally — not politely. You can be confused, indifferent, or excited. Ask questions a real person would ask.

Their idea: [DESCRIBE YOUR PROTOTYPE IN 2 SENTENCES]
Critical assumption they're testing: [YOUR ASSUMPTION]

First, react to seeing the prototype for the first time. Then answer the 3 questions they ask. Be realistic — most people are less excited than founders expect.
1

One team member pastes this into Claude or ChatGPT and describes your prototype.

2

Another team member "interviews" the AI using your 3 prepared questions.

3

Note every surprise or confusion. These are likely real reactions you'll get outside too.

4

Refine your 3 questions or tweak your prototype based on what you learned. Then go test for real.

🤖 AI Activity 2 — Red Team Attack Claude / ChatGPT YC Style

Ask AI to Destroy Your Idea

Y Combinator investors challenge every startup with hard questions. Simulate this before your real test — find the weaknesses yourself first.

Red Team Prompt

Stress-Test Your Idea

Act as a tough but fair Y Combinator investor. I'm going to pitch my startup idea and I want you to challenge it hard.

My idea: [DESCRIBE YOUR IDEA — what it is, who it's for, how it works]

Please give me:
1. The 3 most likely reasons this idea will fail
2. The 3 things a competitor could do to make our solution irrelevant
3. The one assumption that sounds obvious but is probably wrong
4. A "pre-mortem": it's 18 months from now and this failed — what happened?

Be direct. Don't soften the feedback.

After reading: which criticisms surprised you? Which worry you most? Does your prototype test any of these risks?

🤖 AI Activity 3 — Generate a Digital Prototype AI Tool Google Sprint Style

Build a Testable Landing Page in 10 Minutes

Use AI to generate landing page copy, then paste it into a free tool (Google Forms, Carrd.co, or Notion) to create a testable digital artefact before the field sprint.

Landing Page Copy Prompt

Generate Your Landing Page

Write a simple landing page for this concept: [DESCRIBE YOUR IDEA]
Target user: [WHO]
Core benefit: [WHAT PROBLEM IT SOLVES IN ONE LINE]
How it works: [3-STEP PROCESS]

Include:
- A headline (max 8 words, punchy)
- A subheadline (1 sentence — who it's for and what it does)
- 3 benefit bullet points
- A call-to-action button label (e.g. "Get early access")
- 2 FAQs a sceptical user would ask

Tone: simple, direct, no jargon. This is a concept test, not a real launch.
1

Copy the output into Carrd.co (free, no-code, 5 min to publish) or paste into a Google Doc.

2

Add a Google Form link as your "Sign up" button — see if people actually click.

3

Show this on your phone in the field test alongside your paper prototype.
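If you run the landing-page test, decide in advance what counts as real interest. A tiny Python sketch (the counts and the 20% threshold are invented examples — set your own bar before testing, not after) for turning raw tallies into a conversion rate:

```python
# Landing-page signal check: did enough of the people who saw the page sign up?
# The counts and the 20% threshold are invented examples - pick your own
# threshold BEFORE the test so you can't rationalise the result afterwards.
def conversion(views: int, signups: int) -> float:
    return signups / views if views else 0.0

views, signups = 14, 4        # e.g. 14 people saw the page, 4 left an email
rate = conversion(views, signups)
print(f"{signups}/{views} signed up -> {rate:.0%}")
print("signal" if rate >= 0.20 else "weak signal")
```

A sign-up costs the user something (their email), which is why it is a stronger test of intent than a polite "yes, I'd use this."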

💡 AI vs. Real Users — Know the Difference

What Each Is Good For

AI simulation is good for… | Real users are irreplaceable for…
Finding obvious logical holes in your idea | Genuine emotional reactions you didn't expect
Practising your interview questions | Body language, hesitation, real confusion
Red-teaming before investors or advisors | Behavioural insights — what people do vs. say
Rapidly generating testable digital artefacts | The moment someone picks up your prototype and tries to use it
Synthesising patterns from interview notes | Discovering problems you didn't know existed
⏱ Field Test
1:55–2:25 · 30 min
🗂

Exercise Kit — Field Test

🌍 Outside the classroom · 30 min

Test with Real People — Google Sprint Protocol

You've simulated with AI. Now test with humans. Go to corridors, the cafeteria, library, or the campus square. Find 3–5 people who match your target user.

01
Recruit quickly
"Do you have 3 minutes? We're students testing an idea for class — no selling, just a quick look." Target people who look like your user.
02
Set context, not solution
Say: "We're working on something for [context]. Before I show you anything — when did you last deal with [problem]?" Listen first.
03
Show, don't explain
Put the prototype in front of them. First question: "What do you think this is?" Stay completely silent. Count to 10 in your head before speaking.
04
Record everything
One person asks questions, one person writes — including reactions, pauses, and body language. Note what they say AND what they do.
GDS Interview Notes Format Google Sprint

"I noticed… I wonder… What if…"

For each person tested, capture three layers on Print 05:

I

I noticed: Observable facts only — what did the person actually say or do? No interpretation yet.

W

I wonder: What question does this raise? What don't you understand about their reaction?

?

What if: One quick "what if we changed X" idea triggered by this observation.

💡 The Mom Test — Rob Fitzpatrick (popular at YC)

Ask Questions They Can't Lie About

Don't ask: "Would you use this?" / "Is this a good idea?" — people say yes to be polite.

Ask instead: "How do you currently deal with this?" / "Tell me about the last time this happened to you." / "What have you already tried?"

IDEO rule: If they say your idea is great, ask "What would make you NOT use it?" That's where real insight lives.

Why 5 People Is Enough

The Google Sprint Research Principle

Jakob Nielsen's usability research, cited in the Google Design Sprint, shows that 5 user tests reveal roughly 85% of usability issues. After 5 people you'll start hearing the same things — that's your signal. Three strong conversations beat twenty weak surveys.
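The ~85% figure traces back to Nielsen and Landauer's problem-discovery model: with n testers, the share of issues found is 1 − (1 − L)^n, where L is the fraction a single tester uncovers (Nielsen's benchmark average is L ≈ 0.31). A quick check in Python:

```python
# Nielsen & Landauer problem-discovery model: fraction of usability issues
# surfaced by n testers, if each tester finds a share L on average.
def issues_found(n: int, L: float = 0.31) -> float:
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10):
    print(f"{n:>2} testers -> {issues_found(n):.0%} of issues")
# With L = 0.31, five testers surface about 84%; diminishing returns after that.
```

This is also why the protocol says 3–5 people: each extra tester mostly repeats what earlier ones already showed you.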

⏱ Phase 3
2:25–2:45 · 20 min
🗂

Exercise Kit — Phase 3

Phase 3 · Learn

Synthesise What You Found 💬

Back in the room. Turn raw observations — AI and human — into clear learning. Did your critical assumption hold up?

Lightning Synthesis (15 min) IDEO Google Sprint

Find the Pattern

1

Brain dump (3 min): Everyone shares observations — facts only, no interpretation. What did people say? What did they do?

2

Theme hunt (5 min): Group similar observations on Print 06. Look for patterns across all people you tested with.

3

Compare AI vs. real (3 min): Did the AI simulation predict any of what you found? What did real humans reveal that AI missed?

4

The verdict (4 min): Was your critical assumption confirmed, contradicted, or unclear? Write: "We now know that [INSIGHT] because [EVIDENCE]."

🤖 AI Activity 4 — Pattern Synthesis Claude / ChatGPT

Feed Your Notes to AI — Get Instant Patterns

Synthesis Prompt

Find Patterns in Interview Notes

Here are raw notes from user interviews we ran today testing our prototype. Each interview is separated by "---".

[PASTE YOUR NOTES HERE]

Please:
1. Identify the top 3 patterns across all interviews
2. Share the one quote that best captures how users feel
3. State whether our assumption "[YOUR ASSUMPTION]" appears confirmed, contradicted, or unclear
4. Suggest the single most important follow-up question we still need to answer
πŸ” The Build–Measure–Learn Loop in Action YC

Example: Campus Food App

Build: Paper prototype + AI-generated landing page showing "live queue times for campus canteens."

AI simulation found: AI persona said "I'd download it but probably forget to check it." Red team flagged habit formation as the biggest risk — not the technology.

Real test found: 3 of 4 students said they don't change where they eat — they eat where their friends are going. Queue length isn't the problem. Social coordination is.

Learning: Critical assumption CONTRADICTED. New insight: students plan lunch based on who they're eating with, not queue length.

Next loop: What if the solution helped groups coordinate where to eat together?

⏱ Phase 4
2:45–3:00 · 15 min
Phase 4 · Decide

Pivot, Persist, or Zoom? 🔄

Based on what you learned — from AI and real humans — what should you do next? This decision separates good entrepreneurs from those who keep building the wrong thing.

Three Directions Y Combinator

Make a Deliberate Choice

✅

Persist

Your assumption held up. Keep going — build more, test more, narrow your target user further.

Evidence confirmed your core idea
↩️

Pivot

The problem is real but your solution needs to change. Adjust who you serve, what you build, or how you deliver value.

Problem real β€” solution wrong
🔍

Zoom

Your idea is too broad. Focus on the one moment or user segment that resonated most — do that one thing brilliantly.

Too many directions — focus needed
Team Decision — Note-and-Vote Google Sprint

Decide Together in 10 Minutes

1

Each person writes silently: "I think we should [persist / pivot / zoom] because the evidence shows [X]." (90 seconds)

2

Share. If aligned — great. If not, the disagreement is itself a signal: what assumptions are you still split on?

3

Write your updated belief statement: "We now believe [UPDATED IDEA] will help [WHO] do [WHAT]."

4

Set up for Workshop 3 (MVP): What is the one cheapest test you could run before next session?

🤖 AI Activity 5 — Prepare for MVP Claude / ChatGPT

Define Your Minimum Viable Product Direction

MVP Direction Prompt

What's the Simplest Version We Could Launch?

We are a student team building: [UPDATED IDEA]
Target user: [WHO]
What we learned today: [KEY INSIGHT FROM TESTING]
Direction chosen: [PERSIST / PIVOT / ZOOM — and why]

Next workshop is about building an MVP. Please suggest:
1. The simplest version of this idea testable with real users next week (no code required if possible)
2. The 3 features that MUST exist in version 1 (and 3 that can wait)
3. One metric that would prove people actually want this
4. The single riskiest thing still untested that we must validate before building anything bigger

πŸ” Wrap-Up β€” 1 minute per team

What was your critical assumption — and was it confirmed or contradicted?
What surprised you most: the AI simulation, or the real user test?
What is your updated belief statement after today?
Persist, pivot, or zoom — and what is your one next test before Workshop 3?