Chained actions

Workflows combine multiple actions into a repeatable, predictable process. They appear in AI experiences in two forms:

  • User-authored: Users or their organizations define an explicit chain of actions that can run repeatedly over time or across records.
  • AI-reported: AI generates its own steps to complete a task, which users can observe, verify, or stop but do not explicitly build.

As agent capabilities grow, experiences will shift toward the latter form, supporting workflows the agent generates itself in order to explain what it did or plans to do. As this happens, interfaces will need to act as an inspection surface in addition to a construction tool.

Unlike ad hoc prompt chaining, workflows are constructed in advance, using set instructions and variables that allow them to run with consistent outcomes over and over again. This consistency makes them a powerful tool for teams who need reliable results across repeated tasks.

Workflows reduce fragmentation. Instead of team A prompting AI to synthesize content in one way and team B writing their own prompts for a similar task, a workflow lets a single owner define the steps for everyone. Those workflows can then be reused, adapted, or templatized across an organization.

Their use also improves control over how data moves across systems. Prompts can be designed to pull information from third-party tools, transform it, and then pass it to the next stage in the sequence. This creates a centralized orchestration layer for retrieval, summarization, and structured responses.
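
As a minimal sketch of that orchestration layer (all names here are hypothetical, not any product's API), a workflow can be modeled as an ordered list of steps that read shared variables and pass each output to the next stage:

  // Hypothetical workflow runner: illustrative only.
  type Variables = Record<string, string>;

  interface Step {
    name: string;
    run: (input: string, vars: Variables) => Promise<string>;
  }

  // Stubbed integrations; a real flow would call third-party tools here.
  const steps: Step[] = [
    { name: "retrieve",  run: async (_in, vars) => `records for ${vars.account}` },
    { name: "summarize", run: async (input) => `summary of: ${input}` },
    { name: "deliver",   run: async (input, vars) => `sent "${input}" to ${vars.recipient}` },
  ];

  async function runWorkflow(vars: Variables): Promise<string> {
    let output = "";
    for (const step of steps) {
      output = await step.run(output, vars); // each result feeds the next stage
    }
    return output;
  }

  // The same defined flow, reused across records with different variables.
  runWorkflow({ account: "acme", recipient: "ops@example.com" }).then(console.log);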

Workflow actions

Beyond standard steps, workflows should support multiple AI-powered actions that draw best practices from their respective patterns (one way to model these is sketched after the list). For example:

  • Synthesize or summarize information. Consider how to pass citations and references through for verification if needed.
  • Restyle, remix, or restructure. Ensure the workflow has sufficient context and references for the intended style the final outcome should take, like a summary sent via email to a specific recipient.
  • Transform into a different medium. This can be a raw transform, like transcribing voice to text, or a more complex action that requires additional details, such as generating images.
  • Verify details. Add explicit steps for the user to verify the form or substance of the generation before sending or sharing personal information.
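
One way to model these actions, sketched here with illustrative TypeScript types rather than any established schema, is a discriminated union, so each kind of action carries the context its pattern calls for:

  type AiAction =
    | { kind: "synthesize"; sources: string[]; includeCitations: boolean }
    | { kind: "restyle"; targetStyle: string; recipient?: string } // e.g. an email summary
    | { kind: "transform"; from: "voice" | "text" | "image"; to: "voice" | "text" | "image" }
    | { kind: "verify"; checks: string[]; blockUntilConfirmed: boolean }; // human gate

  // Exhaustive handling forces each action's requirements to be addressed.
  function describe(action: AiAction): string {
    switch (action.kind) {
      case "synthesize":
        return `synthesize ${action.sources.length} sources` +
               (action.includeCitations ? " with citations" : "");
      case "restyle":
        return `restyle as ${action.targetStyle}` +
               (action.recipient ? ` for ${action.recipient}` : "");
      case "transform":
        return `transform ${action.from} to ${action.to}`;
      case "verify":
        return `hold for review: ${action.checks.join(", ")}`;
    }
  }

  console.log(describe({ kind: "verify", checks: ["personal info"], blockUntilConfirmed: true }));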

Workflows in agentic contexts

In a shared surface, users need clear boundaries: what was specified by them versus what was improvised by the agent. Without that distinction, trust erodes because people don’t know which parts they control.

AI-powered workflows must provide views for oversight, checkpoints, and redirection. In practice, this means workflow UIs will need dual modes: an “authoring mode” where humans define repeatable flows, and a “reporting mode” where agents surface their own logic for users to review or override.

In this context, workflow design must consider affordances and constraints beyond the component steps themselves. The sketch after this list shows one way such answers might be encoded.

  • How does the interface distinguish between human-authored and AI-authored steps and content?
  • Why did the AI construct a step a specific way, or take a specific action?
  • How can users intervene if the agent is off course?
  • How do humans monitor the flow of information across steps and integrated services?
  • When must the AI alert users to its actions or receive explicit verification before proceeding?
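
As a sketch of how per-step metadata might encode answers to these questions (field names are assumptions for illustration, not an existing standard), the interface can render attribution, reasoning, and gates directly from each step's record:

  type Author = "user" | "agent";

  interface StepRecord {
    id: string;
    author: Author;                // who defined this step
    action: string;                // what the step does
    rationale?: string;            // the agent's stated reason, when AI-authored
    dataTouched: string[];         // services and fields the step reads or writes
    requiresConfirmation: boolean; // must the user approve before it proceeds?
  }

  // A mixed-author flow: attribution and checkpoints come straight from the data.
  const flow: StepRecord[] = [
    { id: "1", author: "user", action: "fetch open tickets",
      dataTouched: ["helpdesk"], requiresConfirmation: false },
    { id: "2", author: "agent", action: "draft replies",
      rationale: "tickets share a root cause, so one template applies",
      dataTouched: ["helpdesk", "crm"], requiresConfirmation: true },
  ];

  for (const step of flow) {
    const gate = step.requiresConfirmation ? " [awaits approval]" : "";
    console.log(`${step.id} (${step.author}): ${step.action}${gate}`);
  }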

Strong system rules help ensure workflows run seamlessly, even as the AI adjusts its prompting and logic on the fly.

Design considerations

  • Anchor workflows in user intent. Allow workflows to be defined from a natural language description of their intended process, then draft a structured flow the user can edit. This removes manual steps for people defining their own workflows, and surfaces the AI's logic when a workflow is constructed in the flow of an agent-driven task.
  • Expose execution costs at design time. Multi-step flows can accumulate compute and time quickly. Where possible, display projected token usage, run time, or API calls at both the workflow and step level. For AI-proposed flows, include this metadata in the agent’s report so users can weigh cost against accuracy before execution.
  • Integrate validation at the step level. Give the option to test or create a sample output for generative steps, allowing quick verification without committing to the full flow. For agent-reported workflows, the same mechanism lets users verify the agent's logic and proposed next steps before executing.
  • Mark ownership of steps unambiguously. Mixed-author flows require clear attribution. Visual markers, grouping, or labels should distinguish user-authored steps from agent-generated ones. This avoids confusion about origination and maintains user confidence in what they directly control.
  • Separate authoring and reporting views. Give users an interface explicitly intended to monitor the AI's actions, prioritizing inspection and oversight. Ensure reports summarize what the agent planned or executed and include references and logic notes for context so users can adjust.
  • Insert configurable checkpoints. Long-running or high-stakes workflows need gates where the agent is explicitly instructed to cede control to the human (see the sketch after this list). Advanced settings can allow these gates to be bypassed, but the risks of doing so must be conveyed directly and unambiguously to balance automation with safety.
  • Enable inline intervention. Allow users to make targeted edits without discarding the whole flow. During execution, allow users to pause the run, skip a step, or adjust parameters in place. On restart, let the workflow pick up from the point of change rather than forcing a full restart.
  • Call out cross-system boundaries. Any handoff to external platforms (sending an email, updating a database, posting to a third-party API) should be explicitly highlighted. Show what data is leaving, what endpoint it is reaching, and under what authority, including affordances to block or confirm.
  • Expose reasoning alongside actions. When AI constructs its own workflow or logic, ensure each step carries metadata describing why it was chosen, such as input conditions, rules, or goals. This enables users to verify alignment with their intent and diagnose errors in the agent’s planning.
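
As a sketch of the checkpoint and resume behavior described above (the confirm callback and step shape are assumptions, not a real framework's API), a runner can gate steps on explicit approval and later pick up from the index where the user paused:

  interface ExecStep {
    name: string;
    checkpoint: boolean; // cede control to the human before this step
    run: () => Promise<void>;
  }

  async function execute(
    steps: ExecStep[],
    confirm: (stepName: string) => Promise<boolean>,
    startAt = 0, // resume from the point of change, not the beginning
  ): Promise<number> {
    for (let i = startAt; i < steps.length; i++) {
      const step = steps[i];
      if (step.checkpoint && !(await confirm(step.name))) {
        return i; // paused: return the index to resume from
      }
      await step.run();
    }
    return steps.length; // completed
  }

  // Usage: the external handoff is gated; declining pauses the run, and a
  // later call with the returned index continues where the user left off.
  const steps: ExecStep[] = [
    { name: "summarize notes", checkpoint: false, run: async () => {} },
    { name: "post to external API", checkpoint: true, run: async () => {} },
  ];
  execute(steps, async (name) => {
    console.log(`Checkpoint: approve "${name}"?`);
    return false; // simulate the user holding the run here
  }).then((i) => console.log(`paused at step index ${i}`));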

Examples

Cofounder's workflows are structured in plain text. The AI itself is responsible for breaking them down into steps in the background.
Copy.ai makes it easy to swap between modes, like directly editing the workflow, chatting with the AI copilot, or previewing the workflow before hitting launch.
Eachlabs transparently shows the average cost to process the workflow as the number of steps and the complexity of the prompts grow.
Lindy includes many supportive patterns like prompt enhancement, token spend, pointers to focus the copilot, and rich prompt editors.
Relay shows the workflow in readable, conversational text by default, and uses in-canvas hints and logic to guide configuration.