Chained action

Chained actions connect multiple inputs in a single thread of work. This can support convergent workflows, aiming to reproduce similar outputs on demand, or divergent workflows, supporting creative exploration or comparison.

In this multi-step action, prompts, parameters, and tools live as nodes on an infinite board and connect through their edges, like a flow diagram. Users can map a run ahead of time, branch on any step, and see how each decision influences downstream results.

This maintains legibility and oversight. A chain that would vanish into a chat log becomes a visual map that supports side-by-side comparison, reuse, and deliberate forks.

Use this when the task benefits from structured work with multiple inputs or outputs.

Forms and characteristics of chains

  • Linear chains: Each prompt feeds into the next in a straight sequence. This is a good starting example for onboarding, but it doesn't showcase the full power of the pattern.
  • Branching chains: A single prompt leads to multiple diverging paths, letting users fork generation into variants and compare them side by side.
  • Convergent chains: Multiple prompts converge into a single output, such as blending a stylistic description derived from a style reference with a subject reference to retheme it.
  • Side-by-side exploration: By creating parallel chains, different prompt paths can be compared against the same input for contrast.
  • Cross-modal chains: Prompts don't have to stay in the same medium. A text description can produce an image, which can then be re-prompted into a video or an audio narration.
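The chain forms above can be sketched as a small graph of prompt nodes. This is a minimal illustrative sketch, not any particular product's API: `generate` is a hypothetical stand-in for a real model call, and the node names are invented.

```python
from dataclasses import dataclass, field

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real model call.
    return f"<output of: {prompt}>"

@dataclass
class Node:
    """A node on the board: a prompt template plus edges to upstream nodes."""
    name: str
    prompt: str                      # may contain {slots} for upstream outputs
    inputs: list = field(default_factory=list)

    def run(self, cache=None) -> str:
        cache = cache if cache is not None else {}
        if self.name in cache:       # a shared upstream node runs only once
            return cache[self.name]
        upstream = {n.name: n.run(cache) for n in self.inputs}
        cache[self.name] = generate(self.prompt.format(**upstream))
        return cache[self.name]

# Linear chain: draft -> refine
draft = Node("draft", "Write a tagline for a hiking brand")
refine = Node("refine", "Shorten this to five words: {draft}", [draft])

# Branching chain: two variants fork from the same draft
variant_a = Node("a", "Make this playful: {draft}", [draft])
variant_b = Node("b", "Make this formal: {draft}", [draft])

# Convergent chain: blend the two branches into one output
blend = Node("blend", "Combine the tone of {a} with the wording of {b}",
             [variant_a, variant_b])
```

Because every node keeps explicit edges to its inputs, a run can be replayed for convergent workflows or forked at any node for divergent ones, which is exactly what the visual board makes legible.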

Chained prompts in traditional workflows

The most common application of chained actions is in traditional workflows that integrate AI steps. In this context, actions can be configured to incorporate information from data sources or user inputs as variables into AI-powered steps.

Workflows can be built entirely from AI-enabled steps, but they don't require AI-native tools. Examples include Zapier, Gumloop, and other popular workflow tools.
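The variable-passing described above can be sketched in a few lines. This is an illustrative example with invented names; `call_model` is a hypothetical stand-in for a real model API, and the ticket record is made up.

```python
def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real model API call.
    return f"[summary of: {prompt}]"

# Data produced by an earlier, non-AI step (e.g. a form or database lookup).
record = {"customer": "Acme Co", "ticket": "Login fails on mobile"}

# The AI step interpolates that data into its prompt as variables...
prompt_template = "Summarize this ticket from {customer} in one sentence: {ticket}"
summary = call_model(prompt_template.format(**record))

# ...and its output becomes a variable for the next step in the chain.
next_step = {"action": "notify", "body": summary}
```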

Chained prompts in creative workflows

Some AI-native workflow tools are designed to make generative iteration more manageable by allowing multiple variations across chained prompts to run simultaneously on the canvas.

Branches can be used to create variants, inputs can be remixed with slight differences to evaluate similar prompts for quality, and users can move seamlessly between modalities without losing sight of the underlying prompt details. Examples include FloraFauna and Weavy.

Chained prompts in agentic workflows

Visual workflows help manage the flow of information across complicated agentic jobs, serving as both a plan of action and an observation dashboard when the agents are live. By visually tracking work in progress, users can follow the agent's logic and actions and intervene where necessary.

Agentic workflows allow users to gate information and access controls to specific subflows, and to tune model selection and parameters like temperature to the task, providing maximum user control.
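One way to picture per-subflow gating is as a small configuration table. This is a hypothetical sketch, assuming invented subflow, tool, and model names; real tools expose this through their own UI.

```python
# Each subflow declares which model, temperature, tools, and context it
# may use; anything not listed is gated off. All names are illustrative.
subflows = {
    "research": {"model": "large-model", "temperature": 0.7,
                 "tools": ["web_search"], "context": ["project_brief"]},
    "billing":  {"model": "small-model", "temperature": 0.0,
                 "tools": ["invoice_api"], "context": ["account_record"]},
}

def tool_allowed(subflow: str, tool: str) -> bool:
    """A gated subflow can only call tools on its own allowlist."""
    return tool in subflows[subflow]["tools"]
```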

Design considerations

  • Educate through microcopy. Writing a single prompt effectively is itself a difficult task. Teaching users how to cascade prompts together is a much more advanced skill. Use copy and other affordances to guide users on how to inject references, variables, and other context to get an effective result.
  • Make onboarding actionable. Don't simply teach the interface during onboarding. Provide users with enough space to construct a functioning, multi-step program so they can be coached on putting this pattern into practice.
  • Reward engagement with credits. The worst thing you can do for engagement is use up all of a user's first tranche of default credits during onboarding. Reward completion with a gift of extra credits so they have plenty of compute spend to explore the product before they need to commit to a purchase.
  • Make costs easy to understand. Compute costs add up for complicated workflows. Follow the pattern of showing transparent spend: estimate the credit cost for individual steps and for the workflow as a whole. Make it equally clear how user parameter changes, like model selection, impact cost. Consider an audit function to identify compute-heavy actions.
  • Support lightweight tests. For flows that are expected to run multiple times, make sure users have a method to evaluate the flow for accuracy and effectiveness before turning it on. Ensure this is available at the step and workflow level.
  • Build with natural language. Sometimes a user has an idea of what they want to do but lacks the skills in your product to build it. Allow users to describe their goal and generate a first draft of chained actions for them to modify.
  • Show errors of all types. Help users avoid low-context errors that result in hallucinations or poor results by showing affordances for prompts that need enhancement. Give options for fallback states or follow-up steps to assist in real time if the model needs additional tokens or context to operate effectively.
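The cost-transparency guidance above amounts to a per-step estimate rolled up into a workflow total. A minimal sketch, with made-up credit rates and step names (real per-model pricing varies):

```python
# Illustrative credit rates per run; real rates differ by provider.
CREDIT_RATES = {"small-model": 1, "large-model": 5, "video-model": 40}

def step_cost(model: str, runs: int = 1) -> int:
    """Estimated credits for one step before the workflow is run."""
    return CREDIT_RATES[model] * runs

workflow = [
    {"name": "draft",  "model": "small-model"},
    {"name": "refine", "model": "large-model"},
    {"name": "render", "model": "video-model"},
]

# Transparent spend: show each step's estimate and the workflow total.
per_step = {s["name"]: step_cost(s["model"]) for s in workflow}
total = sum(per_step.values())                 # 1 + 5 + 40 = 46 credits

# A simple "audit": surface the most compute-heavy action.
heaviest = max(per_step, key=per_step.get)     # "render"
```

Recomputing the estimate whenever a user swaps a model makes the cost impact of parameter changes visible before any credits are spent.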

Examples

Cofounder lays out its workflows in plain text, trusting the AI to connect actions in the background to get a positive result.
Eachlabs workflows demonstrate how chained actions allow each step to be customized at the prompt, model, and parameter level.
FloraFauna connects multiple actions together in unlimited ways to transform and alter underlying content on an open canvas.
Lindy shows the cost of each action within a workflow so users can anticipate how heavy a compute task each step will be.
Relay shows chained actions in a top-level summary that makes it easy to follow the different branches of possibilities.
Weavy's node view demonstrates the empty state of setting up different inputs, where prompts and references chain together to produce an output.
Gumloop follows a common pattern of letting variables from previous steps pass through as context to the AI in later steps.