Modes

AI can operate in different modes to give users quick access to a different model, configuration, or set of behaviors based on their intent. Each mode represents a distinct way the AI behaves, the type of results it produces, or the features that are available.

By selecting a mode, users can align the system with their current task, whether that’s deep research, learning, or general conversation. Modes can be set at the start of an interaction or toggled as needed.

Types of modes

As AI tools expand in scope, modes have become essential for managing complexity. A single model may be capable of casual conversation, academic research, code generation, and structured workflows, but each requires different balances of reasoning, context, and output style. Modes make those differences explicit and actionable.

Common examples include:

  • Open conversation: The default mode of most AI tools, optimized for flexible back-and-forth.
  • Deep research: Runs longer, more compute-intensive queries to surface synthesized insights with citations rather than shallow answers.
  • Study or tutor mode: Provides instructive, step-by-step explanations optimized for learning, often scaffolding reasoning.
  • Copilot: Opens a canvas or IDE where the user and AI collaborate on an asset.
  • Build vs. chat: Within copilot experiences, tools like Bolt let users toggle between chat for open discussion and build mode for structured creation.
  • Creative modes: Offer stylistic variance (for example, “concise vs. elaborate” or “factual vs. imaginative”) as selectable states.
  • Agentive or operator mode: Lets the AI take control of tasks or interfaces, operating from a shared canvas or orchestration layer.
  • Specialized domains: Many products expose domain-specific modes, such as “legal brief,” “math solver,” or “design critique,” each tuned for a narrow set of workflows.

The iconography of modes demonstrates the variety of implementations and metaphors used, even across similar products.

Why modes matter

Modes are more than cosmetic. They influence:

  • Model behavior: Context length, reasoning depth, or system prompts may be altered per mode (see the configuration sketch after this list).
  • Output type: A research mode may return structured evidence, while a casual chat mode offers plain summaries.
  • Feature set: Attachments, plugins, or integrations may be enabled or hidden depending on mode.
  • Cost and performance: Some modes consume more tokens or GPU cycles, or add latency, raising tradeoffs for users and providers.
  • User expectations: Switching modes sets a promise. If the AI is in research mode, people expect rigor and traceability. If it’s in creative mode, they expect variation and freedom.
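
To make this concrete, a mode can be thought of as a named bundle of settings the product applies together. The sketch below is a hypothetical illustration, not any product’s actual API; the ModeConfig shape, field names, and values are assumptions.

```typescript
// Hypothetical shape for a per-mode configuration bundle.
// Field names and values are illustrative, not tied to any real product API.
interface ModeConfig {
  name: string;
  systemPrompt: string;        // how the model is framed in this mode
  maxContextTokens: number;    // context length budget
  reasoningDepth: "fast" | "standard" | "extended";
  outputFormat: "plain" | "structured-with-citations";
  enabledFeatures: string[];   // attachments, plugins, integrations exposed in the UI
  costTier: "low" | "high";    // signals token and latency tradeoffs to the user
}

const modes: Record<string, ModeConfig> = {
  chat: {
    name: "Open conversation",
    systemPrompt: "Be a helpful, conversational assistant.",
    maxContextTokens: 8_000,
    reasoningDepth: "fast",
    outputFormat: "plain",
    enabledFeatures: ["attachments"],
    costTier: "low",
  },
  research: {
    name: "Deep research",
    systemPrompt: "Synthesize findings and cite every source.",
    maxContextTokens: 128_000,
    reasoningDepth: "extended",
    outputFormat: "structured-with-citations",
    enabledFeatures: ["attachments", "web-search"],
    costTier: "high",
  },
};
```

Bundling the settings this way keeps the promise of each mode inspectable: switching modes swaps one bundle for another rather than nudging individual knobs.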

Ultimately, modes let users decide how much power, risk, or creativity they want to unleash at a given moment. Designing them well means balancing flexibility with clarity: making sure users always know what state they’re in and what they can expect the AI to deliver.

Design considerations

  • Treat a mode as a contract, not a theme. A mode changes how the system interprets input and what the same action does, so the promise must be explicit and stable. If behavior drifts outside that promise, users experience classic mode errors and lose trust.
  • Design explicit entry and exit paths. Make switching intentional and make exiting obvious. Borrow from selection/edit-mode patterns: clear affordances to enter, visible state while active, and predictable ways to leave. This reduces accidental state carryover.
  • Reconfigure the surface when the mode changes. If the behavior changes, the tools should too. In research-like modes, foreground evidence and citations, and de-emphasize styling controls. In creative or build modes, expose composing and variation controls. Copilot’s inline vs. chat views illustrate how surfaces can reconfigure around task intent.
  • Set inheritance rules and stick to them. Decide which parts of the session carry across modes, for example memory, attachments, or citations, and which reset, like tone or formatting. Unclear inheritance creates the conditions for mode errors. Make the rules visible at switch time (see the sketch after this list).
  • Balance defaults, routing, and manual control. Most people will stay in the default. Offer a safe, versatile default and optional auto-routing, with a clear override. Microsoft’s Creative, Balanced, and Precise styles show how a default anchors behavior while alternatives remain available.
  • Make modes discoverable, but add guardrails for costly states. Place high-value modes where they can be found, yet preview tradeoffs like longer runs, higher token use, or narrower sources. Perplexity’s Focus modes and academic filter demonstrate scoped, labeled states tuned to user intent.
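
One way to keep inheritance rules honest is to encode them explicitly rather than leave them implicit. The sketch below shows one hypothetical policy; the SessionState fields and the carry-over choices are assumptions for illustration, and the same rules could be surfaced in the UI at switch time.

```typescript
// Hypothetical sketch of switch-time inheritance rules: which session
// state carries across a mode change and which resets. Names are illustrative.
interface SessionState {
  memory: string[];       // long-lived facts about the user or task
  attachments: string[];  // files added during the session
  citations: string[];    // sources gathered so far
  tone: string;           // stylistic preference
  formatting: string;     // e.g. "markdown" or "plain"
}

// One possible policy: memory, attachments, and citations carry over;
// tone and formatting reset to the new mode's defaults.
function switchMode(
  state: SessionState,
  defaults: Pick<SessionState, "tone" | "formatting">
): SessionState {
  return {
    memory: state.memory,
    attachments: state.attachments,
    citations: state.citations,
    tone: defaults.tone,
    formatting: defaults.formatting,
  };
}
```

Whatever the policy, the point is that it is a single, stated rule rather than an accident of which settings happened to persist.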

Examples

Cursor’s modes change how the IDE responds to the user’s input, pausing new code generation and switching to conversation on demand.
Duolingo Max supports an “explain my answer” mode, opening a conversational interface where the user can understand the logic behind the lesson.
Gemini follows the convention of allowing modes to be toggled on and off from the input box at the start, or in the flow of conversation.
Perplexity’s modes are presented as tabs in its initial CTA, and include web search, deep research, and labs (build).
Replit hosts a large assortment of modes to adapt to whether the user is getting started, editing, or understanding code.