Persona Tests are structured research runs. You show a concept, message, or set of options to a persona (or a whole panel) and every respondent returns a scored, tagged, cited reaction. Aggregate results roll up into a read on how the idea landed across the audience.

Study Types

Qualitative

Open-ended reactions. Each respondent writes a verdict, summary, and reasoning in their own voice. Best for messaging, positioning, and concept feedback.

Single-Choice

Respondents pick one option from a set you provide. Returns a vote breakdown plus per-respondent reasoning for each choice. Best for A/B testing headlines, designs, or value props.
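
If it helps to think in types, the two study types differ only in whether options are attached. The sketch below is an illustrative TypeScript shape, not BuildBetter’s actual schema.

```typescript
// Illustrative discriminated union for the two study types.
// Field names are assumptions, not BuildBetter's actual schema.
type Study =
  | { studyType: "qualitative" } // open-ended reactions
  | {
      studyType: "single_choice";
      options: { label: string; imageUrl?: string }[]; // 2 or more
    };
```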

Launching a Test Run

1. Write the prompt

This is what respondents will react to — a headline, a product concept, a value prop, a chunk of landing page copy, anything.
2. Pick the study type

Qualitative for open reactions, or single-choice if you want respondents to pick an option.
3. Add options (single-choice only)

Add 2 or more options. Each option can have a label and an optional image.
4. Attach prompt images (optional)

Add up to 4 images to show alongside the prompt — useful for design mocks, screenshots, or marketing assets.
5. Pick the audience

Either select a list of persona IDs, or choose an entire panel. Replica sets inside a panel all participate.
6. Launch

Runs go through queued → running → completed (or failed). Large panels run asynchronously — you can leave the page and come back.
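
Put together, launching and polling a run might look like the sketch below. `createTestRun` and `getTestRun` are hypothetical helpers, and every field name is an assumption for illustration; BuildBetter’s real API may differ.

```typescript
// Hypothetical client helpers; the real API may differ.
declare function createTestRun(input: unknown): Promise<{ id: string; status: string }>;
declare function getTestRun(id: string): Promise<{ id: string; status: string }>;

const run = await createTestRun({
  prompt: "Meet Acme Flow: changelogs your customers actually read.",
  studyType: "single_choice",               // or "qualitative"
  options: [                                // single-choice only, 2+ required
    { label: "Changelogs your customers actually read" },
    { label: "Changelogs, minus the busywork" },
  ],
  promptImages: ["https://example.com/landing-mock.png"], // up to 4
  audience: { panelId: "panel_prospects" }, // or { personaIds: [...] }
});

// Large panels run asynchronously; poll until the run settles.
// Status moves queued -> running -> completed (or failed).
while (run.status !== "completed" && run.status !== "failed") {
  await new Promise((resolve) => setTimeout(resolve, 5_000));
  Object.assign(run, await getTestRun(run.id));
}
```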

What Each Respondent Returns

For every respondent, the test produces:
  • Overall score: 0–100, higher is more positive. Color-coded in the UI (green 75+, yellow 50–74, red under 50).
  • Sentiment: positive, neutral, or negative.
  • Verdict: a one-line bottom-line reaction.
  • Summary: a first-person reaction in the persona’s voice.
  • Reasoning: why the persona reacted that way, grounded in their profile and signals.
  • Tags: topic/theme labels the persona reacted to, used for the aggregate top-themes rollup.
  • Selected option: the option they picked (single-choice only).
  • Dimension scores: clarity, relevance, trust, and likelihood to act, each 0–100.
  • Cited signals: signal IDs the respondent referenced when reacting.
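
As a mental model, one respondent’s payload could be typed like the sketch below; the interface and field names are assumptions, not the documented schema.

```typescript
// Hypothetical shape of one respondent's result.
// Names are illustrative, not BuildBetter's actual API schema.
interface RespondentResult {
  personaId: string;
  overallScore: number; // 0-100, higher is more positive
  sentiment: "positive" | "neutral" | "negative";
  verdict: string;      // one-line bottom-line reaction
  summary: string;      // first-person, in the persona's voice
  reasoning: string;    // grounded in profile and signals
  tags: string[];       // feeds the top-themes rollup
  selectedOption?: string; // present on single-choice tests only
  dimensionScores: {
    clarity: number;    // each 0-100
    relevance: number;
    trust: number;
    likelihoodToAct: number;
  };
  citedSignals: string[]; // signal IDs referenced in the reaction
}
```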

Aggregate Results

At the top of every completed run you’ll see:
  • Response totals — completed, failed, and running counts
  • Average scores — overall plus each dimension (clarity, relevance, trust, likelihood to act)
  • Sentiment distribution — positive / neutral / negative counts
  • Top tags — the themes that came up most across respondents
  • Option breakdown (single-choice only) — vote count and percentage per option, with the winner highlighted
Below the aggregate you get a sortable, filterable respondent table showing each persona’s score, sentiment, verdict, and expandable reasoning.
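
The aggregate cards are simple rollups of the per-respondent fields. This sketch, reusing the hypothetical `RespondentResult` shape above, shows how the averages, sentiment distribution, and top tags could be derived.

```typescript
// Sketch of deriving aggregate cards from per-respondent results.
// Assumes the hypothetical RespondentResult interface above and a
// non-empty results array.
function aggregate(results: RespondentResult[]) {
  const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;

  // Sentiment distribution: positive / neutral / negative counts.
  const sentiment = { positive: 0, neutral: 0, negative: 0 };
  for (const r of results) sentiment[r.sentiment]++;

  // Top tags: the themes mentioned most often across respondents.
  const tagCounts = new Map<string, number>();
  for (const r of results)
    for (const t of r.tags) tagCounts.set(t, (tagCounts.get(t) ?? 0) + 1);
  const topTags = [...tagCounts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, 5);

  return {
    averageOverall: avg(results.map((r) => r.overallScore)),
    // The other dimensions follow the same pattern as clarity.
    averageClarity: avg(results.map((r) => r.dimensionScores.clarity)),
    sentiment,
    topTags,
  };
}
```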

Saving and Sharing Results

Every completed run has two export paths:
  • Save to Knowledge — creates a structured Knowledge page with the prompt, aggregate metrics, top respondents, and breakdown by type. Keeps the finding inside BuildBetter next to other research.
  • Export to PDF — produces a formatted report with the same content for external sharing.
Run the same prompt against two different panels — e.g., prospects vs. power users — and compare the aggregate cards side by side. It’s one of the fastest ways to see where messaging resonates differently by segment.
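
A minimal sketch of that comparison, reusing the hypothetical `createTestRun` helper from the launch example (panel IDs are made up for illustration):

```typescript
// Same prompt, two panels; compare the aggregate cards once both complete.
const prompt = "Changelogs your customers actually read.";
const [prospects, powerUsers] = await Promise.all([
  createTestRun({ prompt, studyType: "qualitative", audience: { panelId: "panel_prospects" } }),
  createTestRun({ prompt, studyType: "qualitative", audience: { panelId: "panel_power_users" } }),
]);
```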

Limits

  • Up to 100 replicas per panel member
  • Up to 4 images per test prompt
  • Single-choice tests require 2 or more options
  • No limit on qualitative study prompt length
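
If you are building against these limits, a client-side pre-check can fail fast before launch. A sketch under the same hypothetical field names:

```typescript
// Illustrative pre-flight check mirroring the documented limits.
// These are presumably enforced server-side too; this just fails fast.
function validateTestRun(input: {
  studyType: "qualitative" | "single_choice";
  options?: { label: string }[];
  promptImages?: string[];
}): void {
  if (input.studyType === "single_choice" && (input.options?.length ?? 0) < 2) {
    throw new Error("Single-choice tests require 2 or more options.");
  }
  if ((input.promptImages?.length ?? 0) > 4) {
    throw new Error("A test prompt supports at most 4 images.");
  }
}
```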