Working with AI feels like a black box. Having the AI show its work helps the user understand its logic, intervene when necessary, or even learn from its approach.

Showing your steps is a human pattern that builds trust. Say you managed somebody and assigned them a task they hadn't performed before. You might ask them to walk you through their strategy, and to check in with you, before they turned around to execute on that plan. Trust works both ways, though. If that person was feeling lost, you could coach them by showing how you would approach the task, then empower them to put that plan into play.

Both of these scenarios show up in apps using this pattern today.

Show my work

RAG apps like Julius and Perplexity transparently show the steps they took to reach a response. The AI does not ask the user to confirm before it proceeds, but it does offer a moment for the user to stop the generation with controls and re-orient the AI. In these scenarios, the AI leaves footprints the user can follow later to understand how the AI reached the logic of its response, and to tune their input more appropriately if needed.

Julius and Perplexity both show their work as they process a prompt

Check my work

For more complicated tasks, particularly those assigned to a copilot, the user is given an opportunity to approve the AI's approach before it is put into play. GitHub Copilot shows all of the files it will touch and impact, decreasing the risk of a mistake. Zapier provides a similar level of transparency for AI-driven Zaps.

For regularly repeated assignments, consider using a workflow instead, so the AI can move quickly in the background knowing the user has already approved its route.

Details and variations

  • Plans can be implemented automatically or require the user's permission to implement
  • All plans should show the steps the AI plans to take, or has taken, to complete the response
  • The AI may ask follow-up questions during the plan to make sure it fully understands the user's intent
  • Progress should be visible to the user, with controls to stop it if they want
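The variations above can be sketched as a minimal approval loop. This is an illustrative sketch only; `Plan`, `run_plan`, and the callback names are hypothetical, not any real product's API:

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    steps: list[str]                                  # steps shown to the user up front
    completed: list[str] = field(default_factory=list)  # visible progress
    stopped: bool = False                             # set if the user halts mid-run

def run_plan(plan: Plan, approve, execute_step, should_stop) -> Plan:
    """Surface the full plan, optionally gate on approval, then execute
    step by step while keeping progress and a stop control available."""
    # "Check my work": nothing runs until the user approves the route.
    if not approve(plan.steps):
        return plan  # plan rejected; no steps executed
    for step in plan.steps:
        if should_stop():        # stop control stays available mid-run
            plan.stopped = True
            break
        execute_step(step)       # do the work for this step
        plan.completed.append(step)  # progress stays visible to the user
    return plan
```

For the "show my work" variant, `approve` can simply always return `True`, so the steps are displayed as footprints without blocking execution; for a pre-approved workflow, the same loop runs in the background with the approval recorded once.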



Transparency before action

Plans of action build user trust and preserve the user's autonomy over the AI. They also help users understand the paths the AI takes through tokens and through data, which can help users learn something new - especially in a learning context. For example, consider an AI copilot intended to help onboard people into a company. A user could ask it a question like "How should I prepare my monthly report?" and, by showing its work, the AI can teach the user where to find this information, the necessary templates, and other guiding material.

Potential risks

Wasted time

For tasks that are unlikely to lead to disruptive outcomes, starting with a plan of action takes a lot of the user's time with unclear benefits. The Zapier example above illustrates this. It's helpful to describe my intent in natural language instead of building something manually, but I don't need a gut check first because nothing is being turned on or off in the process. As AI agents get more intelligent and are given more responsibility, this pattern may become critical more often.

Use when:
The AI is given a complex task, like building a multi-step workflow or a complete output from a single prompt. This way the user can understand or correct the AI's thinking before it starts building.


Perplexity shows the steps it is taking to process the user's search query
Zapier shows the user its steps to implement a workflow based on the prompt before it takes action
GitHub Copilot first shows the user its implementation plan. Once the user approves it, the steps to implement it are visible to the user as the AI makes the changes to the files
Julius shows its progress as it creates artifacts to answer the user's prompt, making its approach transparent
Midjourney shows the user the result as it progresses through older models until it reaches the final result