Follow up

In an ideal scenario, the user is so good at prompting that they reach a great outcome on their first try. In reality, that's often not the case. Follow ups are prompts, questions, or inline actions that help users refine or extend their initial interaction with the model so it can better understand their intent.

A well-timed follow up saves compute cycles, prevents wasted effort, and communicates that the AI is working alongside the user rather than starting over.

  • In open conversation or unstructured search, follow ups are used to probe deeper into the user's interests and needs.
  • During deep research or other compute-heavy tasks, follow ups precede generation, ensuring the AI thoroughly understands the user's intent before spending significant compute.
  • In action-oriented flows, follow ups are used as nudges and inline actions to engage the user further.

Successful follow ups can serve as a pseudo sample prompt, borrowing context from the initial request and giving the user the sense that the AI is moving forward with them. Consider combining follow ups with an Action plan to provide even more upfront transparency and control to the user.
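This "pseudo sample prompt" mechanism is straightforward to sketch. Below is a minimal, hypothetical TypeScript example (the `Exchange` type and `expandFollowUp` helper are illustrative assumptions, not any product's API) showing how a selected follow up can borrow context from the prior exchange so the composed prompt continues the thread rather than starting over.

```typescript
// Minimal sketch (hypothetical names): a selected follow up is expanded into
// a full prompt that carries the prior exchange as context, so the model
// continues the thread rather than starting from scratch.
interface Exchange {
  userPrompt: string;
  modelResponse: string;
}

function expandFollowUp(followUp: string, last: Exchange): string {
  return [
    `Earlier, the user asked: "${last.userPrompt}"`,
    `You responded: "${last.modelResponse}"`,
    `The user now follows up with: "${followUp}"`,
    "Continue from the previous answer; do not repeat it.",
  ].join("\n");
}
```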

Follow ups throughout the user lifecycle

Follow ups are especially important early in the user journey, when the AI has the least information about the user, their interests, and the context they are operating in. The more the AI has to guess, the more likely it is to waste compute power (and the user's time) on the wrong direction.

As the AI builds its memory of the user, or when the user provides attachments and other context up front, the AI has to guess less. Follow ups then become less necessary, and when they do appear, they can be more personalized.

Variations and forms

  • Conversation extenders: Suggest additional questions, topics, or actions for the user to take after completing the previous action.
  • Clarifying questions: Ask about missing information or ambiguous phrasing. Example: “Do you want results for Europe only?”
  • Depth probes: Offer to drill into a persona, scenario, or detail. Example: “Should I expand on budget trade-offs or only summarize the budget overall?”
  • Comparisons: Suggest pros and cons, alternatives, or benchmarks. Example: “Would you like to see side-by-side comparisons?”
  • Action nudges: Turn a generative result into an actionable step. Example: “Send an email draft?”
  • Share/Export options: Extend the work into other formats. Example: “Would you like me to generate a slide of this concept?”
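One way to make these variations concrete is to model them as a single tagged structure, so one UI surface can render any of them while the kind drives icons, placement, and analytics. The sketch below is illustrative only; the type and field names are assumptions, not drawn from any particular product.

```typescript
// Minimal sketch (hypothetical types): one tagged structure for all the
// follow up variations listed above.
type FollowUpKind =
  | "extender"   // conversation extenders
  | "clarifying" // clarifying questions
  | "depth"      // depth probes
  | "comparison" // comparisons
  | "action"     // action nudges
  | "export";    // share/export options

interface FollowUp {
  kind: FollowUpKind;
  label: string;           // short text shown on the chip or button
  prompt?: string;         // the full prompt sent if the user selects it
  requiresInput?: boolean; // true for clarifying questions awaiting an answer
}

const example: FollowUp = {
  kind: "clarifying",
  label: "Do you want results for Europe only?",
  requiresInput: true,
};
```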

Design considerations

  • Anchor follow ups in what just happened. Base suggested next prompts on the system’s last response or the user’s prior action. Avoid generic next steps. For instance, Perplexity’s follow ups reference specific facts from the answer to guide further exploration, which keeps continuity and trust intact.
  • Show why you’re suggesting something. Make it clear what connects the follow up to the previous exchange. Use subtle phrasing cues like “You could also ask…” or “Related topics include…” so users understand the logic behind the suggestion rather than seeing it as arbitrary automation.
  • Keep the list short and scannable. Offer a small set of high-value follow ups, and prompt models to reserve this pattern for moments when certain details must be verified before proceeding, or when it genuinely extends the conversation.
  • Balance depth and breadth. Mix one or two “zoom in” suggestions (to refine or elaborate) with one “zoom out” option (to pivot or generalize). This gives users directional control without overwhelming them.
  • Preserve the conversational rhythm. Visually separate follow ups from the model’s main output so users can distinguish new content from next step prompts. Treat them as light invitations, not part of the generated answer.
  • Let users select. For follow ups intended to probe deeper, allow users to regenerate the list of options so they can explore new aspects and uncover next steps they aren't already considering (see the sketch below).
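As a rough illustration of that last point, the sketch below asks the model for a fresh, short list of suggestions grounded in the last response; calling it again regenerates the list. The `llm.complete` client is a hypothetical stand-in for whichever provider SDK you use, not a real library call.

```typescript
// Hypothetical LLM client; swap in your provider's SDK.
declare const llm: {
  complete(req: {
    system: string;
    prompt: string;
    temperature?: number;
  }): Promise<{ text: string }>;
};

// Ask the model for a fresh, small set of follow ups anchored in the last
// response. Calling again regenerates the list, per "Let users select" above.
async function suggestFollowUps(
  lastResponse: string,
  count = 3,
): Promise<string[]> {
  const res = await llm.complete({
    system:
      "Suggest follow-up questions that reference specific facts in the " +
      "answer below. Return one suggestion per line, no numbering.",
    prompt: lastResponse,
    temperature: 0.9, // some randomness so regenerated lists differ
  });
  return res.text
    .split("\n")
    .map((s) => s.trim())
    .filter(Boolean)
    .slice(0, count); // keep the list short and scannable
}
```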

Examples

ChatGPT demonstrates how follow ups can be built into the system prompt instead of relying on the interface. As voice models and interactions become more common, this approach lets the model keep the user engaged without relying on direct text input.
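A hedged sketch of what such an instruction might look like (the wording below is an assumption for illustration, not ChatGPT's actual system prompt):

```typescript
// Minimal sketch (assumed wording): baking follow ups into the system prompt
// rather than rendering them as UI chips. Useful for voice, where there is
// no surface for suggestion buttons.
const SYSTEM_PROMPT = `
After answering, offer at most one brief follow up, phrased as a question
the user could ask next (e.g. "Want me to compare this with last year's
numbers?"). Skip the follow up if the user's request was fully resolved
or they asked you to be brief.
`;
```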
Jasper includes follow ups within its default workflows, helping the user improve their prompt in a way that feels less overwhelming than one-shotting a complicated prompt up front.
Julius allows the user to regenerate suggestions. It also relies on suggestions provided directly by the model itself. The two sets generally match, and it stands out when they don't.