Question types, conditional logic, mobile-first respondent UX, NPS analytics, and AI summarization — how to build a Typeform alternative that actually has a niche reason to exist.
Research-based methodology. This guide pulls from public Typeform and Tally architecture posts, the SurveyJS open-source codebase, NPS literature from Bain & Company, Anthropic’s prompt engineering docs, and our own builds with Claude. Where we have first-person experience we say so; otherwise we’re working from public sources.
The survey market looks crowded from the outside. Typeform owns conversion-style forms, SurveyMonkey owns enterprise research, Google Forms is free, and Tally is the indie darling for “simple but pretty.” But under those tools there are millions of teams using surveys badly: a Google Form for NPS that nobody analyzes, a Typeform that costs $99/month for 200 responses, a SurveyMonkey export that nobody opens. The wedge for a solo founder in 2026 is vertical specificity: an NPS tool for SaaS companies, a post-purchase survey tool for Shopify stores, an employee pulse tool for sub-50-person teams. Generic survey tools treat every question the same. A good vertical survey tool knows what an NPS score means and what a 4-star CSAT response should trigger.
This guide is for someone who wants to ship a paid survey product in 3–5 weeks using Claude as their primary thinking partner. We’ll spend most of our time on the parts you can’t copy from a tutorial: the question-type discriminator that doesn’t paint you into a corner, the conditional logic engine, and the analytics that make the data actually useful. If you want the broader build playbook first, our How to build a SaaS with Claude guide covers the general scaffolding workflow this one builds on top of.
Most builders see “a list of questions and a list of answers” and assume two tables will do it. Two tables will get you to a demo. They will not get you to a paying customer.
A short-text answer is a string. A multi-select is an array. A rating is an integer in a fixed range. An NPS is a 0–10 integer with a hard meaning. A matrix is a map of row-id to column-id. A file upload is a URL plus mime-type plus size. If you store all of these as answer_text, your analytics layer is doomed. The right move is a response_answers table with a question_type discriminator and typed columns: value_text, value_number, value_array, value_json. Validation lives at write time, not in the chart code.
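A minimal sketch of what that write-time discipline looks like in application code, assuming the typed-column names described above (the exact union of question types is an assumption based on this guide's list):

```typescript
// Write-time validation for typed answer columns. The CHECK constraint in
// Postgres is the last line of defense; this runs before the insert.

type QuestionType =
  | "short_text" | "long_text" | "email" | "url" | "date"
  | "single_choice" | "multi_choice"
  | "rating" | "nps" | "csat" | "scale"
  | "matrix" | "file_upload";

interface AnswerRow {
  question_type: QuestionType;
  value_text: string | null;
  value_number: number | null;
  value_array: string[] | null; // jsonb in Postgres
  value_json: unknown | null;   // jsonb in Postgres
}

// Returns an error message, or null when the right typed column is populated.
function validateAnswer(a: AnswerRow): string | null {
  switch (a.question_type) {
    case "short_text": case "long_text": case "email":
    case "url": case "date": case "single_choice":
      return a.value_text !== null ? null : "expected value_text";
    case "rating": case "csat": case "scale":
      return a.value_number !== null ? null : "expected value_number";
    case "nps":
      // NPS has a hard meaning: an integer in 0..10, nothing else.
      return a.value_number !== null &&
             Number.isInteger(a.value_number) &&
             a.value_number >= 0 && a.value_number <= 10
        ? null : "nps must be an integer 0-10 in value_number";
    case "multi_choice":
      return Array.isArray(a.value_array) ? null : "expected value_array";
    case "matrix": case "file_upload":
      return a.value_json !== null ? null : "expected value_json";
    default:
      return "unknown question_type";
  }
}
```

The payoff is that chart queries can GROUP BY `value_number` or unnest `value_array` directly, with no per-row parsing.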
The moment you ship “skip to question 7 if answer to question 3 is X,” you have a directed graph of question dependencies. If you model it as if/else inside the survey-renderer code, you cannot answer questions like “which questions did this respondent actually see?” or “what’s the completion rate for the conditional branch?” You need a conditions table where each row is a rule that applies to a question.
Roughly 70% of survey responses now come from phones. If your survey UX is “a long scroll of inputs,” your completion rate will be roughly half of Typeform’s. The single-question-per-page paradigm Typeform pioneered isn’t a stylistic choice — it’s the only respondent UX that holds up on a 5-inch screen with one thumb.
Every survey tool can collect 500 free-text answers. Almost none of them help you read 500 free-text answers. This is where Claude earns its keep: a one-prompt summary of every open-ended question, themed and counted, is the feature that makes a customer renew.
The data model for a survey SaaS is small but the type discipline is everything. You need workspaces, surveys, questions (with type and ordering), conditions, responses, and response_answers. Get the question-type discriminator right with Claude and the rest of the build is downhill.
I'm building a survey SaaS positioned as an NPS tool for SaaS companies.
Eventually it'll support general surveys, but the schema needs to handle
all common question types from day one.
Question types I need to support:
- short_text, long_text
- single_choice, multi_choice (with array of options stored on question)
- rating (1-5 stars), nps (0-10), csat (1-5 emoji)
- scale (1-N with custom labels), matrix (rows x columns)
- file_upload, date, email, url
Design the complete Postgres schema with these tables:
- workspaces, users, surveys, questions, conditions, responses,
  response_answers
- A response represents one respondent's session through a survey
- A response_answer is one answer to one question, with TYPED columns:
  value_text, value_number, value_array (jsonb), value_json (jsonb)
- A CHECK constraint enforces the right typed column is populated for each
  question_type (e.g. nps must populate value_number, multi_choice must
  populate value_array)
- Questions have an order_index, a settings jsonb (for type-specific
  config like the nps follow-up label), and a required boolean
- Conditions table: question_id, operator (equals, not_equals, contains,
  greater_than), value, action (show, hide, jump_to_question_id)
Output one SQL file with tables, indexes for the hot queries (load full
survey for rendering, list responses for a survey paginated, aggregate
answers by question), and Supabase RLS so:
- Workspace owners can read/write only their own workspace data
- Public anonymous respondents can INSERT into responses +
  response_answers via an RPC `submit_response` only, never direct insert
- Public anonymous can SELECT a survey + its questions + conditions, but
  only if survey.status = 'published'
Push back on Claude’s first draft if it tries to put answers in a single value column with a type-tag. That works for a demo and breaks the moment you write your first chart query. Insist on the typed-columns pattern. Claude will get there with one nudge.
This is the feature that separates a survey tool from a form builder. Logic lets you say “if the respondent picked ‘churned customer,’ skip the upsell question and ask about cancellation reason.” The temptation is to bake this into the React renderer. Don’t. The logic must be evaluable from both the frontend (to decide what to render next) and the backend (to validate that the responses you got are consistent).
Write a pure TypeScript function `evaluateConditions(state, conditions)`
that takes:
- state: a map of question_id -> answer_value (the partial response so far)
- conditions: array of { question_id, operator, value, action,
jump_to_question_id }
It returns a structured plan:
{
visibleQuestionIds: string[], // questions to show given current state
nextQuestionId: string | null, // resolved next question after jumps
hiddenByConditions: string[] // questions explicitly hidden
}
Operators to implement:
- equals, not_equals (exact match on text or number)
- contains (substring for text, includes for arrays)
- greater_than, less_than (number/date)
- is_answered, is_not_answered (any non-empty value)
Actions:
- show / hide a target question
- jump_to_question_id from the current question on submit
Edge cases:
- A question hidden by conditions cannot be required-blocking
- Circular jumps must terminate (track visited question_ids)
- If multiple conditions target the same question, hide takes precedence
over show
Write Vitest unit tests covering: simple show/hide, jump-on-answer,
circular protection, and the matrix-question case where the answer is
a map.
One non-obvious payoff: because evaluateConditions is pure, you can run it server-side at submit time to confirm the respondent only answered questions that were actually visible to them. This catches forged submissions and also catches your own logic bugs in production data.
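A minimal sketch of the show/hide subset of that evaluator, reused server-side to catch forged submissions. Field names mirror the prompt above; the exact shapes are an assumption, and jump resolution (with its visited-set loop protection) would sit alongside this:

```typescript
type Answer = string | number | string[] | null | undefined;

interface Condition {
  question_id: string;        // question whose answer is tested
  operator: "equals" | "not_equals" | "is_answered";
  value?: string | number;
  action: "show" | "hide";
  target_question_id: string; // question shown or hidden
}

function matches(answer: Answer, c: Condition): boolean {
  switch (c.operator) {
    case "equals": return answer === c.value;
    case "not_equals": return answer !== c.value;
    case "is_answered":
      return answer !== null && answer !== undefined && answer !== "";
  }
}

// A question targeted by any "show" rule is hidden until that rule fires.
function isShowTargeted(id: string, conditions: Condition[]): boolean {
  return conditions.some((c) => c.action === "show" && c.target_question_id === id);
}

function evaluateConditions(
  state: Record<string, Answer>,
  allQuestionIds: string[],
  conditions: Condition[],
): { visibleQuestionIds: string[]; hiddenByConditions: string[] } {
  const hidden = new Set<string>();
  const shown = new Set<string>();
  for (const c of conditions) {
    if (!matches(state[c.question_id], c)) continue;
    if (c.action === "hide") hidden.add(c.target_question_id);
    if (c.action === "show") shown.add(c.target_question_id);
  }
  // hide takes precedence over show
  const visible = allQuestionIds.filter(
    (id) => !hidden.has(id) && (shown.has(id) || !isShowTargeted(id, conditions)),
  );
  return { visibleQuestionIds: visible, hiddenByConditions: Array.from(hidden) };
}

// Server-side reuse at submit time: any answered question that is not
// visible under the same rules indicates a forged or buggy submission.
function forgedAnswers(state: Record<string, Answer>, visibleQuestionIds: string[]): string[] {
  const visible = new Set(visibleQuestionIds);
  return Object.keys(state).filter((id) => !visible.has(id));
}
```

Because the function is pure, the frontend renderer and the submit-time validator cannot drift apart: they call the same code.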
The respondent page is what determines whether you have a 22% completion rate or a 71% one. Single question per page, large tap targets, keyboard auto-advance for multiple choice, no header, no footer, no logo — the page is the question.
Build the respondent UX at /s/[survey-slug] using Next.js App Router and
Tailwind. Mobile-first — the layout starts at 375px wide and scales up.
Behavior:
- Server component fetches the survey + questions + conditions in one
query
- Client component renders ONE question at a time, full viewport
- Progress dots/bar at the top showing position in survey, but accounting
for conditional visibility (don't promise 10 questions then ask 6)
- Keyboard: arrow keys advance for single_choice, Enter for text, A-J
hotkeys for choices 1-10
- Auto-advance for single_choice and rating (no extra "Next" tap)
- For long_text, do NOT auto-advance — require explicit Next
- Persist partial state to localStorage every keystroke + to the server
every 5 seconds via supabase.rpc('save_partial_response')
- On final submit, call supabase.rpc('submit_response') and redirect to
/s/[survey-slug]/thanks
- If the respondent comes back to the URL after closing, prompt to resume
Question type renderers I need:
- short_text: single-line input, autofocus
- long_text: textarea, no auto-advance
- single_choice: large tap-target buttons stacked vertically
- multi_choice: same buttons but toggle selected, with explicit Next
- rating: 5 large stars, tap to set
- nps: horizontal 0-10 strip with color gradient (red to green)
- csat: 5 emoji faces
- file_upload: drag-or-tap, upload to Supabase Storage with signed URL
Use Tailwind transitions between questions (slide left out, slide right
in) but respect prefers-reduced-motion.
Two non-obvious wins: prefilling from URL parameters (so a user link from a logged-in session can pass ?email=&user_id=) and respecting prefers-reduced-motion. The second one is a real accessibility issue and is one line of code.
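The prefill half of that can be sketched in a few lines. Which params count as “known” is an assumption here; everything else rides along as response metadata:

```typescript
// Split a respondent URL into prefill answers and passthrough metadata.
// Param names (email, user_id) are illustrative, not a fixed contract.
function parsePrefill(url: string): {
  answers: Record<string, string>;
  metadata: Record<string, string>;
} {
  const params = new URL(url).searchParams;
  const known = new Set(["email", "user_id"]);
  const answers: Record<string, string> = {};
  const metadata: Record<string, string> = {};
  for (const [key, value] of params) {
    if (known.has(key)) answers[key] = value;
    else metadata[key] = value; // e.g. plan=pro, cohort=2026-01
  }
  return { answers, metadata };
}
```

Storing the unknown params as metadata is what makes the cohort filtering in the analytics section possible later.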
The analytics dashboard is what your customer logs in to. If it’s generic charts on top of generic data, you are SurveyMonkey. The win is that each question type has a correct chart and a correct headline metric. NPS gets a -100 to +100 score. CSAT gets a percent-satisfied. Rating gets an average. Multi-choice gets a horizontal bar chart with response counts. Open-ended gets the AI summary from step 5.
Write a Postgres view + a TypeScript renderer that produces analytics
for one survey question.
For each question_type, the analytics output should be:
- nps: { score: number (-100..100), promoters: int, passives: int,
detractors: int, total: int, distribution: int[11] }
Score = ((promoters - detractors) / total) * 100
Promoters = answers 9-10, passives 7-8, detractors 0-6
- csat: { satisfied_pct: number, total: int, distribution: int[5] }
Satisfied = answers 4-5
- rating (1-5 stars): { average: number, total: int, distribution: int[5] }
- single_choice: { total: int, options: [{ value, count, pct }] }
Sorted by count descending
- multi_choice: same shape as single_choice but each respondent can hit
multiple options
- short_text/long_text: { total: int, sample: string[] (first 5),
ai_summary: string | null }
- matrix: { total: int, rows: [{ row_id, columns: [{ col_id, count }] }] }
Write:
1. A Postgres function `aggregate_question(question_id uuid)` that returns
jsonb with the right shape for the question_type
2. A React component that picks the right chart subcomponent per
   question_type. Use Recharts.
3. The NPS chart specifically should display a big score number, the
3-bar promoter/passive/detractor breakdown, and a small 0-10 distribution
bar chart underneath
Make the aggregate function fast: GROUP BY on the typed columns, no
client-side aggregation.
If you’re positioning for SaaS NPS specifically, add cohort filtering on top: NPS by signup month, NPS by plan tier, NPS by feature usage. That requires the survey URL to accept passthrough params and the response to store them in metadata jsonb. It’s the sort of feature you’d build with patterns from our analytics dashboard SaaS guide.
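Cohort slicing reduces to grouping scores by a metadata key before running the same NPS arithmetic. The metadata shape here is an assumption based on the passthrough-params idea above:

```typescript
// One NPS response with its passthrough metadata (e.g. captured from
// ?plan=pro on the survey URL and stored in a metadata jsonb column).
interface NpsResponse { score: number; metadata: Record<string, string> }

function npsByCohort(responses: NpsResponse[], key: string): Record<string, number> {
  const groups: Record<string, number[]> = {};
  for (const r of responses) {
    const cohort = r.metadata[key] ?? "unknown";
    if (!groups[cohort]) groups[cohort] = [];
    groups[cohort].push(r.score);
  }
  const out: Record<string, number> = {};
  for (const [cohort, scores] of Object.entries(groups)) {
    const promoters = scores.filter((s) => s >= 9).length;
    const detractors = scores.filter((s) => s <= 6).length;
    out[cohort] = Math.round(((promoters - detractors) / scores.length) * 100);
  }
  return out;
}
```

In production you would push this GROUP BY into the Postgres function, but the per-cohort math is identical.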
This is the feature you put on your landing page. “Read 500 open-ended answers in 30 seconds.” The implementation is one Claude call per question, run on demand, cached on the question_id + last response timestamp.
Write a Supabase Edge Function `summarize_open_ended(question_id)` that:
1. Loads all response_answers.value_text where question_id = $1 and
value_text is non-empty
2. If the count is 0, returns { summary: null, themes: [] }
3. Otherwise calls Claude with this system prompt:
"You are an analyst reading raw survey responses. Your job is to find
the 3-7 dominant themes and quantify how many respondents mentioned each.
Output STRICT JSON of shape:
{
themes: [
{ name: string (max 6 words), count: number, sample_quote: string }
],
one_line_summary: string (max 30 words),
notable_outliers: string[] (max 3 short quotes that don't fit any theme)
}
Rules:
- Themes should be specific, not generic. 'Pricing too high' not 'Pricing'.
- Counts must be exact, not approximate
- sample_quote must be a verbatim quote from the input, not paraphrased
- If fewer than 5 responses, return all of them as themes with count 1
- Never invent quotes"
4. The user message contains "QUESTION: " plus the question text, then
   "RESPONSES (one per line):" followed by the raw answers, one per line
5. Validate Claude's response against a Zod schema. On schema violation,
retry once with a stricter follow-up. On second failure, return error.
6. Cache the result on a `question_summaries` table keyed by
(question_id, last_response_at). Invalidate when a new response lands.
Use the latest claude-3-5-sonnet model. Set max_tokens to 1500 and
temperature to 0.2 for consistency.
The reason to enforce strict JSON output and validate it: hallucinated “themes” with fake counts will silently destroy customer trust. One bad summary in front of a CEO and they’ll never use the feature again.
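A hand-rolled sketch of that validation posture, assuming the JSON shape from the prompt above. A Zod schema would do the shape half more tersely; the anti-hallucination half (quotes must appear verbatim in the input) still needs a custom check either way:

```typescript
interface Theme { name: string; count: number; sample_quote: string }
interface Summary {
  themes: Theme[];
  one_line_summary: string;
  notable_outliers: string[];
}

// Returns the parsed summary, or null if the model's output fails either
// the shape check or the verbatim-quote check.
function validateSummary(raw: unknown, originalResponses: string[]): Summary | null {
  if (typeof raw !== "object" || raw === null) return null;
  const s = raw as Record<string, unknown>;
  if (typeof s.one_line_summary !== "string") return null;
  if (!Array.isArray(s.themes) || !Array.isArray(s.notable_outliers)) return null;
  for (const t of s.themes as Theme[]) {
    if (typeof t.name !== "string" || typeof t.count !== "number") return null;
    if (typeof t.sample_quote !== "string") return null;
    // Reject invented quotes: sample_quote must appear verbatim in the input.
    if (!originalResponses.some((r) => r.includes(t.sample_quote))) return null;
  }
  return s as unknown as Summary;
}
```

On a null result, retry once with a stricter follow-up, exactly as the edge function spec above describes.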
There are several viable pricing models for a survey SaaS, each suited to a different niche.
Avoid free-forever plans with response caps so low (Typeform’s 10) that nobody can ever validate the product. A 14-day full-feature trial converts much better. For payments, the standard Stripe-vs-Lemon-Squeezy tradeoff applies — we covered it in our feedback collection SaaS guide, which overlaps heavily with this build.
You will not out-feature Typeform. Typeform has 400+ engineers and a 10-year head start on the conversion-rate research. You will not out-price Tally either — Tally is genuinely free for most users. Solo founders win on vertical specificity: the NPS tool for SaaS companies, the post-purchase survey tool for Shopify stores, the employee pulse tool for sub-50-person teams.
Each of these is a real micro SaaS idea with thousands of potential customers. Pick one. Talk to twenty of them. Build the published-survey URL they’ll actually share before you build the workspace settings page.
A survey SaaS that stores answers in typed columns, treats logic as a pure evaluable graph, summarizes open-ended responses with Claude, and picks one vertical to be excellent at will out-earn a generic Typeform clone every time. Pick the niche first, the question types second, the dashboard last.