Start with a testable hypothesis
Every idea hides assumptions about who has the problem, how painful it is, and whether people will pay for a solution.
Turn those assumptions into clear hypotheses.
For example: “Small accounting firms will pay for an automated invoice reconciliation tool because it saves at least three hours per week.” A good hypothesis names the customer, the problem, the desired outcome, and a measurable signal.
Prioritize riskiest assumptions
Not all assumptions are equal. Use an “impact vs. uncertainty” lens to prioritize tests that address the riskiest, highest-impact assumptions first.
If you don’t know whether customers actually have the problem, test that before building the feature set. This keeps early effort focused on the questions that most affect viability.
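The impact-vs-uncertainty lens can be sketched as a simple scoring pass. The 1–5 scores and assumption names below are illustrative team judgments, not measured values:

```python
# Rank assumptions by impact x uncertainty so the riskiest,
# highest-impact ones get tested first. Scores (1-5) are
# hypothetical judgments assigned by the team.

def prioritize(assumptions):
    """Return assumptions sorted by risk score, highest first."""
    return sorted(assumptions,
                  key=lambda a: a["impact"] * a["uncertainty"],
                  reverse=True)

backlog = [
    {"name": "Customers have the problem", "impact": 5, "uncertainty": 4},
    {"name": "They will pay monthly",      "impact": 4, "uncertainty": 3},
    {"name": "Feature X matters",          "impact": 2, "uncertainty": 3},
]

for a in prioritize(backlog):
    print(a["name"], a["impact"] * a["uncertainty"])
```

Even a crude score like this makes the ordering explicit and debatable, which is the point: the team argues about risk before building anything.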
Design rapid experiments
Low-cost, fast experiments let you learn without building the whole product. Common experiments include:
– Smoke tests / landing pages: Create a simple page describing the value and a call-to-action (sign up, join waitlist). Drive traffic with organic posts or inexpensive ads to measure interest.
– Concierge MVP: Manually deliver the service to a small group to observe actual behavior and refine the offering.
– Wizard of Oz: Present a polished interface that seems automated while the backend is manually operated.
– Pricing experiments: Offer tiered pricing or a poll to gauge willingness to pay and price sensitivity.
Run targeted customer interviews
Qualitative insights uncover motivations and context that metrics alone miss. Recruit interviewees who match your target profile and focus on behavior, not opinions. Effective questions:
– Tell me about the last time you dealt with [problem].
– How did you solve it? What alternatives did you consider?
– How often does this happen? How much time or money does it cost you?
– Have you paid for or tried anything to solve it? What would make a new solution worth paying for?
Avoid leading questions and hypothetical prompts like “Would you use this?” Instead ask them to describe real past behavior. Record interviews (with permission) and synthesize themes to spot patterns.
Measure learning, not vanity
Track metrics tied to your hypothesis: conversion rate from landing page, demo-to-paid conversion, retention after first use, and qualitative satisfaction signals.
The goal is to learn whether the core value proposition holds, not to hit “good” numbers immediately. If an experiment disproves an assumption, that’s progress—either pivot the value proposition or stop investing.
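One way to keep a metric tied to its hypothesis is to pre-register a threshold and judge the result against it, not against whatever number comes back. A minimal sketch, using a hypothetical smoke-test result and a normal-approximation confidence interval:

```python
import math

def conversion_ci(converted, visitors, z=1.96):
    """Conversion rate with a ~95% normal-approximation interval."""
    p = converted / visitors
    se = math.sqrt(p * (1 - p) / visitors)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical landing-page result, judged against a threshold
# chosen BEFORE the experiment ran, not after seeing the data.
THRESHOLD = 0.05  # pre-registered signup-rate target
rate, low, high = conversion_ci(converted=24, visitors=400)

print(f"rate={rate:.1%}, 95% CI=({low:.1%}, {high:.1%})")
print("supported" if low >= THRESHOLD else "inconclusive or unsupported")
```

With small samples the interval is wide, which is itself a learning signal: a 6% point estimate over a 5% threshold may still be inconclusive, prompting a larger test rather than a premature pivot.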
Iterate deliberately
Use the build-measure-learn loop.
When an experiment confirms your hypothesis, expand the scope or refine pricing and onboarding. When it fails, diagnose why—wrong customer segment, poor messaging, or the problem isn’t painful enough. Keep cycles short and experiments small to limit sunk cost.
Document decisions and evidence
Capture experiment designs, outcomes, and next steps in a shared, simple tracker.
Future fundraising conversations, hiring decisions, and prioritization will all be stronger when grounded in documented learning rather than anecdotes.
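The tracker does not need to be elaborate. A minimal sketch of one possible record shape (field names and the sample entry are assumptions, not a prescribed schema), exported to CSV so it can live in a shared spreadsheet:

```python
from dataclasses import dataclass, asdict
import csv, io

@dataclass
class Experiment:
    hypothesis: str
    metric: str
    threshold: str
    result: str = ""
    decision: str = ""  # e.g. "persevere", "pivot", "stop"

log = [
    Experiment("Small firms will join a waitlist",
               "landing-page signup rate", ">= 5%",
               result="6% (24/400)", decision="persevere"),
]

# Dump the log as CSV for a shared spreadsheet.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(asdict(log[0])))
writer.writeheader()
for e in log:
    writer.writerow(asdict(e))
print(buf.getvalue())
```

The important discipline is that every row pairs a hypothesis with a pre-set threshold and an explicit decision, so the evidence trail survives team turnover.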
Move from validation to scale mindfully
Once core assumptions are validated, focus on retention and unit economics before scaling acquisition. Sustainable growth is easier when customers keep using the product and the cost to acquire them aligns with lifetime value.
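The acquisition-cost-versus-lifetime-value check can be sketched with a simple constant-churn model; the numbers below are hypothetical inputs, and real businesses will need cohort data rather than a single churn rate:

```python
def ltv(arpu_monthly, gross_margin, monthly_churn):
    """Simple lifetime value: contribution margin per month / churn rate."""
    return arpu_monthly * gross_margin / monthly_churn

def ltv_cac_ratio(arpu_monthly, gross_margin, monthly_churn, cac):
    """How many dollars of lifetime value each acquisition dollar buys."""
    return ltv(arpu_monthly, gross_margin, monthly_churn) / cac

# Hypothetical inputs: $50/mo ARPU, 80% gross margin,
# 5% monthly churn, $300 cost to acquire a customer.
ratio = ltv_cac_ratio(50, 0.80, 0.05, 300)
print(f"LTV/CAC = {ratio:.2f}")  # common rule of thumb: aim for >= 3
```

Here LTV is $800 against a $300 CAC, a ratio under the oft-cited 3x rule of thumb: a signal to improve retention or margin before pouring money into acquisition.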
Validating ideas fast is a discipline: define hypotheses, prioritize risk, run quick experiments, and listen to real customers.
That approach turns uncertain bets into informed decisions and builds a foundation for durable growth.