Welcome — turn an idea into a working prototype, by yourself
Who: students in the Entrepreneurship and Innovation course who completed Workshops 1 and 2 and are preparing for Workshop 3. No coding experience expected.
Outcome: turn an idea into a working, shareable web prototype by yourself, in under 2 hours, using only conversation with AI.
Module flow at a glance
Pick your pace
- One sitting: read sections 1–5 over coffee, then do all five exercises in one 2-hour block.
- Spread out: sections 1–2 on day 1, sections 3–5 on day 2, then one exercise per day for five days.
- Short on time: read sections 1, 4 and 7, and do exercises 1, 4 and 5.
5-minute setup before you start
Definition
Vibe coding is the practice of building software by describing intent in natural language to an AI model, accepting most of the generated code without manually reviewing every line, and iterating through conversation until the result feels right.
The term was popularised by AI researcher Andrej Karpathy in early 2025. It formalises a new division of labour: the human is the product owner, the AI is the implementer, and the interface between them is plain language.
Why it matters for entrepreneurs
Three things have always stood between a founder and a testable prototype: time, money, and technical skill. Vibe coding collapses all three.
What used to take two weeks now takes two hours.
You do not have to hire anyone to validate an idea.
The bottleneck moves from syntax to problem framing, which entrepreneurs are already trained in.
Vibe coding vs. traditional coding vs. no-code
| | Traditional | No-code | Vibe coding |
|---|---|---|---|
| Input | Syntax | Drag-and-drop | Natural language |
| Flexibility | Unlimited | Template-limited | Very high |
| Learning curve | Months | Days | Hours |
| Best for | Production | Marketing sites | MVPs, tests, tools |
When to use and when not
Good fit: landing pages, waitlists, smoke tests, internal tools, dashboards, calculators, classroom aids, MVPs, idea validation.
Use with caution: anything with real user accounts, payments, or regulated data. Prototype it — but get an engineer before launch.
Not appropriate: mission-critical systems, medical devices, financial infrastructure.
Q: A friend wants to test whether students would pay for a "rent a lab partner" app. Is vibe coding appropriate?
A: Yes — this is the smoke-test use case. Build a landing page with a signup form in under an hour, share it, and measure interest before investing in a full build.
The six pillars at a glance
1. Intent and Specification: describing clearly what the software should do, for whom, and what "done" looks like, before worrying about how.
2. Prompting: writing effective instructions — role, context, task, constraints, examples, output format.
3. Context Management: knowing what the AI can see and when to start fresh versus continue.
4. Iteration Loops: run, observe, describe the gap, ask for a fix. Small loops beat big prompts.
5. Testing by Description: say "when I click X, nothing happens" instead of fixing the code yourself. The AI is the debugger.
6. Deployment and Sharing: getting your work out of the preview and onto a URL that real users can open.
Pillar 1 — Intent and Specification
Most failed sessions fail here, before any code is written. A good specification names the user, the key actions, and the success signal.
Test: if you can finish the sentence "A [user] opens this to [do action] and leaves feeling [emotion]", you are ready to prompt.
Pillar 2 — Prompting
Prompts are briefings, not questions. Give the AI four things: Role, Context, Task, Constraints. "Build me a study buddy app" is a question. "You are a senior frontend developer. Context: a stressed first-year student on a phone. Task: build a single-page waitlist form with a counter. Constraints: single HTML file, under 200 lines, mobile-first" is a briefing.
Pillar 3 — Context Management
AI models have a "window" — a limited amount of text they can hold in mind. Over a long chat, older context becomes stale. Symptoms: the AI forgets, re-introduces bugs, contradicts itself.
Remedy: start a new chat, paste the current code, re-establish the brief. This is not failure — it is hygiene.
Pillar 4 — Iteration Loops
Professionals never try to get everything right in one prompt. They aim for a working (ugly) version in 5 minutes, then refine. One change per iteration.
Pillar 5 — Testing by Description
You do not need to read code. You need to use it. Click every button. Try weird inputs. Open it on your phone. Then describe what you observed, not what the code should do.
Pillar 6 — Deployment and Sharing
A prototype that only works on your laptop cannot validate anything. Use one free deploy button (Vercel or Netlify) and send a real URL to real users. See Section 3 for tools.
Q: You have chatted with Claude for 40 minutes. It keeps breaking a feature and reintroducing the same bug. Which pillar is failing and what do you do?
A: Pillar 3, Context Management. The chat has grown too long. Start a new chat, paste the current working code, write a fresh brief. You will be productive again in minutes.
A — General-purpose chat AI
Claude.ai (Anthropic). Generates "Artifacts" — live previews of web apps in the browser. Best for learning and single-file prototypes.
ChatGPT (OpenAI). "Canvas" mode shows code next to chat. Best for beginners and brainstorming.
Gemini (Google). Very generous free context window. Best as a second opinion when another model is stuck.
Le Chat (Mistral). European alternative. Solid free tier, good for writing and code.
Perplexity. Research-oriented AI with live web search. Great companion while you build.
Strong coding model. Generous free usage.
B — Browser "build me an app" tools
Bolt.new. Describe an app, get a working web app with live preview and one-click deploy. Daily token limit on free plan.
Lovable.dev. Similar to Bolt. Strong for landing pages and SaaS UIs. Limited daily messages on free tier.
v0.dev. From Vercel. Best-in-class for beautiful UI components.
Replit. Cloud IDE that builds full-stack apps with databases. Limited credits on free plan.
Firebase Studio. Google's AI app builder (successor to Project IDX). Full-stack, Google-integrated.
Quick app generator with a polished UI. Good for SaaS-style prototypes.
C — AI code editors (needed in Workshop 3)
VS Code + Copilot. Microsoft's free editor plus GitHub Copilot Free (around 2,000 completions/month).
Cursor. Purpose-built AI editor. Two-week Pro trial, then generous free tier.
Windsurf. Strong free tier. Often free where Cursor starts charging.
Zed. Fast open-source editor with AI hooks. Lightweight alternative.
Open-source Cursor alternative. Bring your own free API keys.
D — Hosting and sharing
Where your code lives on the internet.
Vercel. One-click deploy. Connect GitHub, push, site updates in 30 seconds.
Netlify. Vercel alternative. Equally capable for static sites.
GitHub Pages. Free hosting built into GitHub. Perfect for single-file experiments.
Cloudflare Pages. Generous free tier. Great global performance, simple GitHub integration.
E — Helpful free extras
Formspree. Turn any HTML form into working email capture. 50 submissions/month free.
If you just need answers, skip the code.
For logos and favicons in 3 minutes.
Record a demo video of your MVP for submission in Workshop 3.
Scenario-based tool stacks — pick one that matches your project
Instead of one "recommended stack", here are five scenarios with a primary option and competitive alternatives. Use the primary if you want the safest path; try the alternatives if the primary hits a free-tier limit or doesn't fit your style.
- Personal micro-tool: e.g. a calculator, quiz, Five Whys form, Crazy 8s timer, Lean Canvas template.
- Smoke-test landing page: validate a startup idea with a headline, 3 benefits, a form, and a waitlist counter.
- Concierge MVP: sign-ups, lists, dashboards, a concierge form that routes to a Sheet.
- Pitch demo: an interactive piece that investors or judges can click during your pitch.
- Classroom aid: a small interactive aid that ties to a lecture and can be forked by students.
The 5-step loop

1. Specify: name the user, the key action, and what "done" looks like.
2. Prompt: brief the AI with role, context, task, and constraints.
3. Run: open the result and actually use it.
4. Describe the gap: tell the AI what you observed versus what you expected.
5. Repeat: one change per turn until it feels right.
First-prompt template (copy and fill in)

"You are a senior frontend developer. Context: [who the user is and what situation they are in]. Task: build [one-sentence description of the page and its key feature]. Constraints: a single HTML file, under [N] lines, mobile-first."
Worked example
Specify: A first-year student opens this to find a study partner in the same course. Done = they have submitted their email and course, and see a "147 others already on the list" counter.
Follow-ups, one per turn:

- "Add a counter under the form: '147 others already on the list'."
- "Make the form comfortable to fill in on a phone."
- "After submitting, replace the form with a thank-you message."
Refine vs. restart — the decision table
| Refine when | Restart when |
|---|---|
| Code mostly works | Same bug returns 3+ times |
| You change one thing at a time | AI contradicts earlier decisions |
| Each iteration gets closer | Code gets longer but worse |
45 min · Bolt.new or v0.dev · Stage: idea validation
The story. Marta had a hunch that Vilnius dog owners would pay for a neighbourhood dog-walking cooperative. Instead of building the service, she spent 45 minutes in Bolt.new creating a single landing page: a headline ("Never walk alone — find a dog-walking buddy on your street"), three benefit bullets, a fake "237 owners already signed up" counter, and an email-capture form connected via Formspree. She shared the link in two Facebook groups and spent 5€ on an Instagram ad targeting her city. Within 48 hours, 74 people had submitted their email. That was enough signal to move to Scenario 2.
Why this works. You are not building a product. You are building a question: "Do enough people want this?" The landing page is the cheapest possible experiment. If nobody signs up, you saved weeks. If they do, you have a list of early adopters.
Lessons. The counter is fake — that is fine for a smoke test. What matters is whether real people click "Join". If you get fewer than 20 sign-ups per 1,000 impressions, the idea needs reshaping before you write another line of code.
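A form like Marta's takes only a few lines of HTML once it is wired to Formspree. Below is a minimal sketch; the form ID `yourFormId` is a placeholder — Formspree issues a real ID when you create a form on their site.

```html
<!-- Minimal email-capture form posting to Formspree.
     "yourFormId" is a placeholder, not a real endpoint. -->
<form action="https://formspree.io/f/yourFormId" method="POST">
  <label>
    Your email:
    <input type="email" name="email" required placeholder="you@example.com">
  </label>
  <button type="submit">Join the waitlist</button>
</form>
```

Submissions arrive in your Formspree dashboard and inbox; no backend of your own is needed.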
1 hour · Lovable.dev + Formspree · Stage: early revenue
The story. Jonas validated demand for a weekly meal-planning service for busy parents (Scenario 1 got 60 sign-ups). Instead of building an algorithm, he created a simple intake form with Lovable.dev: family size, dietary restrictions, budget, preferred cuisines, delivery day. When a parent submitted the form, Formspree emailed Jonas the answers. He spent 30 minutes per family creating a meal plan in a Google Doc and emailed it back. This is the concierge MVP — you are the product. Three families paid 9€/week before any code existed.
Why this works. This is how Airbnb started (the founders personally photographed apartments) and how DoorDash started (the founders personally delivered food). You learn what customers actually need — not what you imagine they need — because you fulfil every order by hand and hear every complaint.
Lessons. Track how long each manual fulfilment takes. When it exceeds 2 hours per week, that is your signal to automate. Until then, manual is faster and teaches you more.
20 min · Claude.ai · Stage: lead generation
The story. Gabija was building a tutoring marketplace. To attract tutors, she needed their attention. She built a "What's your tutoring hourly rate worth?" calculator in Claude.ai: tutors enter hours per week, subject, and city, and the tool shows estimated monthly income, tax estimate, and a comparison to the market average. She embedded it on her landing page. Tutors who used the calculator were 4x more likely to sign up than those who just read the landing page.
Why this works. A calculator or quiz gives users something immediately valuable in exchange for their attention (and optionally their email). It demonstrates domain knowledge — you are not just asking for sign-ups, you are proving you understand their world. Classic lead magnet strategy adapted to vibe coding speed.
Lessons. The data does not need to be perfect — use best estimates and label them clearly ("estimates based on public data"). The goal is engagement, not accounting precision. Add an email gate ("Enter your email to save your results") to capture leads.
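The arithmetic behind a calculator like Gabija's is simple enough to sketch. Every number below is an invented placeholder, not real market data — exactly the kind of estimate the lesson above says to label clearly.

```javascript
// Sketch of the calculator's core maths. All rates are hypothetical
// placeholders — a real tool would label its data sources.
const HOURLY_RATE = { maths: 25, english: 20, physics: 22 }; // €/h, illustrative
const TAX_RATE = 0.15;        // flat estimate, not tax advice
const WEEKS_PER_MONTH = 4.33; // average weeks in a month

function estimateMonthlyIncome(hoursPerWeek, subject) {
  const rate = HOURLY_RATE[subject] ?? 20; // fallback for unknown subjects
  const gross = hoursPerWeek * rate * WEEKS_PER_MONTH;
  const tax = gross * TAX_RATE;
  return {
    gross: Math.round(gross),
    tax: Math.round(tax),
    net: Math.round(gross - tax),
  };
}
```

A result like this feeds three on-screen numbers (gross, tax, net); the email gate then appears before the breakdown is shown.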
1 hour · v0.dev + Vercel · Stage: fundraising / competitions
The story. Lukas was pitching a student mental health check-in tool at a startup competition. Instead of another slide saying "here's a mockup", he showed the judges a real URL. They could open it on their phones during the pitch: a simple daily mood check-in with a 7-day streak tracker and an anonymous class mood heatmap. It took Lukas 1 hour in v0.dev, and the judges remembered his pitch over 15 others — because they could touch it.
Why this works. Investors and judges see hundreds of decks. Almost none of them include a URL the audience can click during the presentation. A working demo — even a simple one — signals execution ability and seriousness. You do not need a complete product. You need one interactive screen that makes the concept tangible.
Lessons. Deploy to Vercel before the pitch and put the short URL on your last slide. "Try it now: mood.vercel.app" is more powerful than any mockup screenshot. Test the URL on the venue WiFi before presenting.
1 hour · Claude.ai · Stage: proof of concept
The story. Egle wanted to help first-time founders structure their thinking. She asked Claude.ai to build an artifact where users paste a one-paragraph business idea and instantly get a pre-filled Lean Canvas with all 9 blocks populated. The AI inside Claude's artifact analysed the input and generated customer segments, value propositions, revenue streams — all from a single paragraph. She shared the artifact link with 20 classmates. Twelve used it and said it saved them an hour of thinking.
Why this works. Claude artifacts can call the AI model that powers them — meaning you can build AI-enabled tools without API keys, servers, or billing. This is the closest thing to magic in the vibe coding toolbox: your prototype can think. Use it for anything that transforms unstructured input into structured output: idea analysis, feedback summarisation, content generation, categorisation.
Lessons. This only works inside Claude artifacts (because the artifact has access to the model). If you need this on a standalone website, you would need an API key — which is not free. For prototyping and classroom use, the artifact approach is perfect.
30 min · Claude.ai · Stage: teaching & engagement
The story. A teaching assistant for a design thinking course needed a Five Whys exercise for 80 students. Commercial tools were expensive and overcomplicated. In 30 minutes with Claude.ai, she built a single HTML file: five expandable "Why?" fields, a progress bar, and a summary box that compiled the root-cause chain. She hosted it on GitHub Pages for free. Every student opened the same URL on their phone and worked through their own problem. The TA collected results by asking students to screenshot their summary.
Why this works. Classroom tools need to be dead simple: one URL, no login, works on any phone. A single HTML file meets all three criteria. Because the file is self-contained, students can fork it (download and modify) for their own projects. The professor gets a custom tool that fits the exact pedagogy, instead of bending the course to fit a generic platform.
Lessons. Build the simplest version first (text inputs only), then add polish in iteration 2 (progress bar, colours, export). Students do not need features — they need clarity. If you want to collect results, add a "Copy summary to clipboard" button instead of building a backend.
1 hour · Claude.ai + CSV · Stage: analysis & storytelling
The story. Tomas had 6 months of customer survey data in an Excel file for his food-delivery startup analysis. His professor asked for "insights, not spreadsheets". He exported the file as CSV, dropped it into Claude.ai, and asked for an interactive dashboard. Claude built an HTML page with three charts (satisfaction trend, NPS by age group, top complaint categories), a filter dropdown, and a summary paragraph. The whole thing took 50 minutes and looked better than anything he could have built in Excel.
Why this works. Founders and students often have data but lack the visualisation skills to make it speak. Claude can read CSV files, understand the structure, and generate Chart.js or Recharts dashboards directly. The result is a standalone HTML file you can present in class, embed in a pitch deck, or host as a live URL. No Python, no Tableau, no learning curve.
Lessons. Clean your CSV before pasting — remove empty rows, fix column names, use consistent date formats. The cleaner the input, the better the output. If the dataset is large (500+ rows), paste only a representative sample and ask Claude to structure the code so you can swap in the full dataset later.
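The data-shaping step behind such a dashboard is easy to recognise once you have seen it. This sketch shows only how cleaned survey rows become a Chart.js line-chart configuration; the column names (`month`, `satisfaction`) are illustrative and would match your own CSV headers.

```javascript
// Sketch: turn cleaned survey rows into a Chart.js config object.
// Column names are illustrative — substitute your own CSV headers.
function toSatisfactionChart(rows) {
  return {
    type: "line",
    data: {
      labels: rows.map(r => r.month),            // x-axis: one label per row
      datasets: [{
        label: "Avg. satisfaction",
        data: rows.map(r => r.satisfaction),     // y-axis values
      }],
    },
  };
}
```

In the generated HTML, a config like this is passed to `new Chart(canvas, config)` after Chart.js is loaded from a CDN.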
20 min · Claude.ai · Stage: practice & personal use
The story. Dovile was writing her bachelor thesis and kept losing track of her daily writing sessions. In 20 minutes, she built a "Thesis Tracker" in Claude.ai: a simple page where she logs how many words she wrote each day, sees a progress bar toward her 12,000-word goal, and gets a streak counter ("5 days in a row!"). She bookmarked it on her phone and used it every evening. It was not a startup — it was a tool she built for herself in the time it takes to watch a YouTube video.
Why this works. This is the fastest way to practise the vibe coding loop because the stakes are zero and the user is you. You know exactly what "done" looks like because you are the customer. A personal micro-tool is the ideal first project: small scope, immediate feedback, real daily use. Other ideas: Pomodoro timer, reading log, habit tracker, exam countdown, budget tracker for a trip.
Lessons. localStorage means data lives in your browser only. If you clear your browser data, the log disappears. For a personal tool this is fine. For anything shared, you would need a backend (or use Google Sheets as a free database via Formspree).
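The streak logic behind a tracker like Dovile's fits in a few lines. In the real tool the logged dates would come from localStorage; here the function takes them as arguments so the logic is visible and testable. The names are illustrative.

```javascript
// Count consecutive logged days ending at `today` (dates as "YYYY-MM-DD").
// In the browser, the log would be read from localStorage, e.g.
//   const days = JSON.parse(localStorage.getItem("thesisLog") || "[]");
function streakDays(loggedDates, today) {
  const logged = new Set(loggedDates);
  let streak = 0;
  const d = new Date(today);
  while (logged.has(d.toISOString().slice(0, 10))) {
    streak += 1;
    d.setUTCDate(d.getUTCDate() - 1); // step back one day
  }
  return streak;
}
```

After each evening's entry, the page re-runs this and shows "N days in a row!".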
Q: A classmate wants to validate whether Vilnius restaurant owners would pay for a weekly menu-planning tool. Which scenarios, and in what order?
A: First Scenario 1 (smoke-test landing page): a waitlist with zero product. If interest looks real, move to Scenario 2 (concierge form) where you deliver menu plans manually in a Google Doc. Only then build a real product. This is the classic Lean Startup sequence: measure demand → deliver manually → automate what works.
Purpose: Experience the full cycle from prompt to working artifact for the first time. This is your "Hello World" moment — proof that conversation creates software.
Tool: Claude.ai (free account).
Step-by-step walkthrough:
What you should see: A full-screen page with your name in large bold text, a coloured gradient behind it, and a button. Each click changes the background to new random colours. The artifact panel has a "Copy" button — you can download this as an HTML file.
Common mistakes:
Deliverable: Working artifact plus 2 sentences answering: "What surprised me about this experience?"
Reflection prompts: Was it faster or slower than you expected? Did the result look more or less professional than you imagined? What would you change first?
Purpose: Practice the iteration loop (Pillar 4) and testing by description (Pillar 5). You will make three changes, one at a time, and learn how to recover when the AI breaks something.
Tool: Same Claude.ai chat from Exercise 1.
Step-by-step walkthrough:
What you should see after all 3 changes: A responsive page with your name, a gradient background that changes on click, a counter that tracks clicks, and a confetti animation at 10 clicks.
When something breaks (and it probably will):
Common mistakes:
Deliverable: Working page with all 3 features plus 3 sentences: "Where did the AI break something and how did I recover?"
Reflection prompts: How did it feel to debug by describing instead of coding? Which change was hardest for the AI to get right? Why do you think that was?
Purpose: Apply specification skills (Pillar 1) and build something genuinely useful. This is the first exercise where you write your own prompt from scratch instead of copying one.
Tool: Claude.ai (new chat — fresh context).
Pick ONE of these three options:
Option A, a Five Whys tool: 5 progressive "Why?" questions, a progress bar, and a summary card at the end showing the root-cause chain.
Option B, a Crazy 8s timer: 8 boxes on screen, an 8-minute countdown, a beep each minute when one box should be done, and drawing or text input per box.
Option C, a Lean Canvas builder: all 9 Lean Canvas blocks as editable fields, plus a "Copy to clipboard" button that exports the canvas as formatted text.
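To make "a single HTML file" concrete, here is a stripped-down sketch of what the AI might produce for the Five Whys option — text inputs and a summary only, no progress bar or styling. Treat it as an illustration of the shape, not as the exercise answer.

```html
<!DOCTYPE html>
<html lang="en">
<head><meta charset="utf-8"><title>Five Whys</title></head>
<body>
  <h1>Five Whys</h1>
  <p>State your problem, then answer each "Why?" in turn.</p>
  <input id="problem" placeholder="The problem">
  <div id="whys"></div>
  <button onclick="summarise()">Show root-cause chain</button>
  <pre id="summary"></pre>
  <script>
    // Build five "Why?" inputs.
    const whys = document.getElementById("whys");
    for (let i = 1; i <= 5; i++) {
      whys.insertAdjacentHTML("beforeend",
        `<p>Why #${i}: <input id="why${i}"></p>`);
    }
    // Compile the filled-in answers into a root-cause chain.
    function summarise() {
      const steps = [document.getElementById("problem").value];
      for (let i = 1; i <= 5; i++) {
        const v = document.getElementById("why" + i).value;
        if (v) steps.push(v);
      }
      document.getElementById("summary").textContent = steps.join("\n↓\n");
    }
  </script>
</body>
</html>
```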
Step-by-step walkthrough:
Copy the code out of the artifact and save it as a file ending in .html. Open it in your browser to verify it works outside Claude.

What you should have at the end: A standalone HTML file that works in any browser, does one useful thing for students, and looks reasonably good.
Common mistakes:
Deliverable: Saved HTML file you can bring to Workshop 3 plus 2 sentences: "What was the hardest part of writing my own prompt?"
Reflection prompts: Did your first prompt produce what you expected? How many iterations did it take? What would you specify differently next time?
Purpose: Use a browser-based builder (not just a chat AI) to create a multi-section landing page. Practice deployment (Pillar 6) by getting a real public URL.
Tool: Bolt.new or Lovable.dev (free tier, sign in with Google).
Step-by-step walkthrough:
What you should have at the end: A professional-looking landing page for your idea, accessible at a real URL that anyone can open.
Common mistakes:
Deliverable: Working landing page (screenshot or URL if deployed) plus 2 sentences: "What is the biggest difference between a chat AI (Exercise 1–3) and a browser builder (this exercise)?"
Reflection prompts: Was the builder faster or slower than Claude for this task? When would you choose a chat AI vs. a builder? Did deployment feel harder or easier than expected?
Purpose: Practice the art of polish — making something good into something professional through prompt-only iteration. This is the exercise closest to real product work: you already have a working version, now you make it shine.
Tool: Same tool and project from Exercise 4 (Bolt.new or Lovable.dev), or use Claude.ai if you did not deploy.
Step-by-step walkthrough:
What you should have at the end: A polished, professional-looking page that you would not be embarrassed to share with a stranger. At least 5 improvements applied through prompts alone.
Common mistakes:
Deliverable: Refined page (screenshot or URL) plus a 1-paragraph reflection answering: "Where did the AI struggle, and where did I struggle to describe what I wanted?"
Reflection prompts: Which type of change was easiest for the AI (visual, functional, content)? Which was hardest? How would you explain the difference between "prompting for a feature" and "prompting for polish" to a friend?
10 tips that save hours
Giant prompts break things. Ask for a single modification per turn.
Paste a screenshot of a site you like. Words are slow; pixels are fast.
Ask for one self-contained HTML file. For prototypes this reduces broken builds enormously.
When the chat goes stale, restart: paste the current code and re-brief.
Get a second opinion from another model: paste your code and ask "what would you do differently?"
Reuse what worked. Your best prompts are assets.
"Stressed first-year on a phone" beats "make it simple".
Test on your phone. Half of bugs only show there.
If nothing works in 60 min, the prompt is wrong, not the AI.
Copy HTML out of the artifact every 15 minutes.
5 classic traps
Three prompt patterns to memorise
1. "Keep everything the same except: [one specific change]. Return the full file."
2. "Here is my current working code. [paste]. Change only: [X]. Return the full file. Do not remove any existing features."
3. "Before writing code, list three risks or edge cases I might have missed. Then implement."
Accounts
Understanding
Practice
Workshop 3 day
Glossary — the twelve words you will hear
Artifact: In Claude.ai, a live-preview panel rendering code next to chat.
Context window: How much text an AI can hold in mind at once.
Deploy: Put your code on a public URL.
Frontend: What users see — buttons, forms, layout.
Backend: Server side — databases, accounts, payments. Usually not needed for prototypes.
MVP: Minimum Viable Product. Smallest thing that lets a real user do the real thing.
Prompt: Natural-language instruction to an AI.
Prompt engineering: The craft of writing prompts that produce the result you want.
React: A popular way of building interactive web interfaces.
Repo: Folder of code with change history, usually on GitHub.
Smoke test: Fast, cheap experiment to reveal whether anyone wants what you are building.
Token: Unit AI models use to measure text. Free tiers often cap daily tokens.
You turn what you built here into a deployed MVP using an AI, GitHub, and Vercel or a similar host — all free. Because you have already run the loop five times, Workshop 3 will be about the product, not the tools.
See you in Workshop 3. Bring your vibes.