Stream of Thought

The AI's Stream of Thought is the visible trace of how it navigated from input to answer. It might include the plan the AI formed, the tools it called, the code it executed on your behalf, and the checks and decisions it made along the way. When this information is visible, the system becomes more legible and easier to trust.

The form of this pattern is simple and nearly universal: a bounded box, with details minimized or hidden entirely behind a click, showing the AI's logic in real time or for review once complete.

Depending on the depth and context of the task, the details included may vary. Across products, you’ll see three broad expressions:

  • Human-readable plans that preview what the AI will do.
  • Execution logs that record tool calls, code, and results.
  • Compact summaries that capture its logical reasoning, insights, and decisions.
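One way to make these three expressions concrete is to model them as a tagged union, so the UI can render each kind of trace entry with the level of detail it warrants. The sketch below is illustrative only; the type and field names are assumptions, not any product's actual API.

```typescript
// Hypothetical model of the three broad expressions of a Stream of Thought.
// All names here are illustrative assumptions.
type TraceEntry =
  | { kind: "plan"; steps: string[] } // human-readable preview of what the AI will do
  | { kind: "execution"; tool: string; input: string; result: string } // tool-call log
  | { kind: "summary"; insights: string[]; decision: string }; // compact recap

// Render each entry with only as much detail as its kind warrants.
function renderEntry(entry: TraceEntry): string {
  switch (entry.kind) {
    case "plan":
      return `Plan:\n${entry.steps.map((s, i) => `  ${i + 1}. ${s}`).join("\n")}`;
    case "execution":
      return `[${entry.tool}] ${entry.input} -> ${entry.result}`;
    case "summary":
      return `${entry.decision} (${entry.insights.length} insights)`;
  }
}
```

A discriminated union like this keeps the rendering exhaustive: if a product later adds a fourth expression, the compiler flags every view that has not yet decided how to display it.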

In addition to defining this content, designers must also consider how much data to show about each tool or reference called, how to signal changes in the AI's logical state, and how much detail to include in each summary.

When setting these standards for your product, consider differences in mode and intensity. Short chats rarely require deep logic, while extended thinking or coding tasks (costlier in both money and time) may need to show more detail so users can monitor and intervene if necessary.

Design considerations

  • Show the plan before you act. Present a short, editable sequence of steps with costs, scopes, and required permissions so people can correct direction early without reading verbose logs. This lowers execution risk and sets accurate expectations.
  • Separate plan, execution, and evidence. Keep three views synchronized: what will happen, what happened, and what supports the result. Users should never wonder whether a citation or file came from a proposed step or a completed one.
  • Tailor visibility to the context. Users running complicated, compute-heavy tasks will look for full traces and logic, while users in simple conversations may need little to no visibility into the AI's process. Set defaults based on the mode, token density, and user preferences, then offer progressive disclosure so users can dig deeper without being overwhelmed.
  • Make steps into states. Treat every step as a clear state: queued, running, waiting for approval, error, retried, completed. Pair these with subtle progress cues in text, visuals, or voice. This makes the AI’s process legible and predictable, and gives users insight into the current state and, in retrospect, into when states changed.
  • Instrument for learning. Log where users intervene, which steps get retried, and which tools produce the most errors. Use this data to simplify plans, improve prompts for tool selection, or replace brittle tools.
  • Respect modality rules. In text, link outputs back to the step that created them. In code, keep analysis readable and exportable. In voice, summarize the current action and the next checkpoint in one sentence. In embodied systems, favor visual paths or intent overlays. Each medium needs its own concise, legible form.
  • Anchor around clear milestones. A great implementation shows a compact plan up front, surfaces scopes and costs, streams meaningful progress while running, and delivers a shareable report at the end. Each stage reduces uncertainty and builds trust.
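The "make steps into states" guidance above can be sketched as a small state machine. This is a minimal illustration, assuming the six states listed in the considerations; the transition rules are plausible defaults, not a prescribed design.

```typescript
// Illustrative per-step lifecycle, using the states named above.
type StepState =
  | "queued"
  | "running"
  | "waitingForApproval"
  | "error"
  | "retried"
  | "completed";

// Assumed transition map so the UI can validate and announce state changes
// rather than silently jumping between arbitrary states.
const transitions: Record<StepState, StepState[]> = {
  queued: ["running"],
  running: ["waitingForApproval", "error", "completed"],
  waitingForApproval: ["running"],
  error: ["retried"],
  retried: ["running"],
  completed: [], // terminal
};

function canTransition(from: StepState, to: StepState): boolean {
  return transitions[from].includes(to);
}
```

Making transitions explicit also supports the instrumentation point: every `error -> retried` edge is an event worth logging when deciding which tools or steps to simplify.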

Examples

ChatGPT reveals its thought process during thinking tasks, showing its logical reasoning as well as its references in real time.
When in Deep Research mode, ChatGPT stores its Chain of Thought in a right-hand drawer, available for review once the generation is complete.
V0 follows a similar path, exposing its logic inline until it is ready to start building, after which the remaining steps are visible from the left drawer while the app builds the codebase in the main canvas.