Sample response

Sample responses are lightweight outputs generated before committing to a full, time-intensive result. They give users a quick preview of what the AI intends to do, letting them confirm direction before resources are consumed or content is overwritten.

This pattern shifts control back to the user. Instead of immediately processing a long draft, large file, or expensive computation, the user sees a short proof of concept. If the sample looks misaligned, they can correct the input early. If it looks right, they can proceed with confidence.
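The flow described above can be sketched in a few lines. This is a minimal illustration, not a real API: `transform` stands in for a generative call (here just title-casing), and `approved` is a hypothetical callback the UI would use to ask the user whether the sample looks right.

```python
def run_with_sample(records, transform, approved):
    """Run `transform` on one record, ask for approval, then scale up."""
    if not records:
        return []
    sample = transform(records[0])        # cheap preview on a single record
    if not approved(sample):              # misaligned: user corrects input early
        return []
    return [transform(r) for r in records]  # full run, now with confidence

# Usage: the "AI" here is just str.title; a real system would call a model.
rows = ["first draft", "second draft", "third draft"]
result = run_with_sample(rows, str.title, approved=lambda s: True)
# result == ["First Draft", "Second Draft", "Third Draft"]
```

The key design choice is that the expensive loop only runs after the single-record preview is accepted; rejecting the sample costs one record's worth of work, not the whole batch.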

For AI companies, sample responses conserve compute power. They prevent wasted runs that the user would likely regenerate or discard. For users, they reduce frustration by catching format or intent mismatches before they become sunk costs of time and attention.

This approach has precedents outside of AI. Zapier tests workflows with a single record before running them at scale. Spreadsheet macros often run on sample data before being applied to the entire sheet. In AI, the distinction lies in the higher cost and unpredictability of generative outputs. By surfacing a controlled sample, the system makes its direction legible and lets the user remain in charge.

Context matters. A sample may be a single row in a table, a 30-second audio clip instead of a full track, or a thumbnail version of an image. In text, it may take the form of a short preview paragraph before a full draft. Each medium requires different balances of fidelity and friction. Too much previewing slows the flow, but the right sample size helps users trust the system without burdening them.

Design considerations

  • Show final intent and polish. A sample should demonstrate how effectively the AI can render the data and match the intent expressed in the user's prompt.
  • Let users skip if they want. Sample runs are helpful when users want to validate their prompt before continuing, but for later iterations this extra step could be unnecessary or cumbersome. Allow users to proceed with running the prompt against all records without a sample if they prefer.
  • Show cost and time before scaling up. Provide per-record cost estimates alongside the sample so users can weigh the impact of the full run before committing to it.
  • Default to sampling when risk or blast radius is high. If an auto-fill run is likely to incur significant compute cost or destroy existing work, it may be appropriate to show a sample first by default. This mirrors workflow testing in other settings, where high-risk actions are preceded by a confirmation step.
  • Show samples alongside existing records. Avoid writing over existing content by showing the sample in a panel or overlay rather than by replacing content in existing fields. Additionally, ensure users understand what data will be changed if the sample is accepted and the run continues.
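The cost-and-risk considerations above can be made concrete. The sketch below is illustrative only: the per-record cost and time figures, and the `$1.00` sampling threshold, are made-up parameters a real product would source from its provider's pricing and its own risk policy.

```python
def full_run_estimate(record_count, cost_per_record, seconds_per_record):
    """Return the human-readable estimate shown alongside the sample."""
    total_cost = record_count * cost_per_record
    total_minutes = record_count * seconds_per_record / 60
    return f"~${total_cost:.2f} and ~{total_minutes:.0f} min for {record_count} records"

def should_sample_first(record_count, cost_per_record, threshold=1.00):
    """Default to a sample when the blast radius (total cost) is high."""
    return record_count * cost_per_record >= threshold

print(full_run_estimate(500, 0.004, 1.2))  # "~$2.00 and ~10 min for 500 records"
print(should_sample_first(500, 0.004))     # True: default to a sample run
print(should_sample_first(10, 0.004))      # False: let the user run it directly
```

Surfacing the estimate next to the sample, and flipping the default when the estimate crosses a threshold, keeps the extra step proportional to what is actually at stake.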

Examples