Caveat

Caveats remind users that an AI system may be wrong, incomplete, or biased. Critically, though nearly ubiquitous, it remains to be seen how effective caveats are at building habits of skepticism and critical thinking when users interact with AI.

You've likely seen caveats, most often placed just below an open input in conversational surfaces, but this is not the only location where they may be appropriate:

  • Chatbots place a line under the input or above each output.
  • Document assistants insert caveats at the top of generated sections.
  • API responses include warning headers in metadata (see the sketch after this list).
  • Voice agents may deliver spoken caveats before or after results.
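
To make the API case concrete, here is one hypothetical shape for a response that carries caveats in its metadata. The field names (caveats, severity, and so on) are illustrative assumptions, not drawn from any particular vendor’s API:

```ts
// Hypothetical response shape for an AI completion API that
// surfaces caveats in structured metadata. All field names are
// illustrative, not any real vendor's API.
interface CompletionResponse {
  output: string;
  metadata: {
    model: string;
    caveats: Array<{
      code: string;                  // machine-readable, e.g. "may_hallucinate"
      message: string;               // human-readable text for the UI to render
      severity: "info" | "warning";  // lets clients vary prominence
    }>;
  };
}

// A client can surface these alongside each output rather than
// relying on one static disclaimer baked into the interface.
function renderCaveats(res: CompletionResponse): string[] {
  return res.metadata.caveats.map((c) => `${c.severity}: ${c.message}`);
}
```

Structured caveats like this let each surface (chat, docs, voice) decide how prominently to render the same warning.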

In enterprise contexts, administrators may enforce stricter or more visible caveats across workspaces.
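
As a sketch of what that enforcement could look like, here is a hypothetical workspace policy object; none of these keys come from a real product:

```ts
// Hypothetical workspace-level policy an administrator might set.
// All keys are illustrative assumptions, not a real product's config.
interface CaveatPolicy {
  enabled: boolean;                 // end users cannot turn caveats off
  placement: "under-input" | "per-output" | "both";
  requireAcknowledgement: boolean;  // e.g. for regulated teams
  customText?: string;              // org-specific wording, reviewed by legal
}

const financePolicy: CaveatPolicy = {
  enabled: true,
  placement: "both",
  requireAcknowledgement: true,
  customText: "AI output is not financial advice. Verify before acting.",
};
```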

Limitations

Caveats can be helpful: for less technical users, they are a simple way to signal a product’s limitations. In that regard, they’re not much different from the warning label on a hair dryer telling you not to use it in the bath.

At the same time, AI models are complex and rapidly evolving, and caveats alone are unlikely to be sufficient to guide users through the nuances of these models.

Furthermore, we should question whether users are already blind to this pattern given its ubiquity. Just as most users opt into terms of service without reading them before signing up for a product, should we believe that they critically understand why a caveat exists and what it means?

Consider going a step further to help users understand and get better outcomes from the model instead of relying on caveats as a fallback. Use Wayfinders to guide them toward better prompts, use References and citations to help them understand how the AI derived its response, and make your AI more transparent so users can understand what is happening behind the scenes.

Design considerations

  • Make caveats visible without being distracting. Place caveats where they naturally align with outputs, not hidden in a footer or splash screen. Users should see them at the moment of decision-making.
  • Use clear, plain language. Write in simple, easy-to-understand language. Link to more technical documentation where possible.
  • Tie caveats to context. A targeted note such as “Check dates for accuracy” is more actionable than a blanket warning. Context helps users know what to verify (see the first sketch after this list).
  • Design caveats as part of a broader support system. A warning alone is insufficient. Use related patterns such as Wayfinders, References, and Footprints to help users recover from or prevent errors.
  • Assume they will be ignored. Don’t presume a caveat is sufficient to guide users away from harmful behavior. Run evals on your prompts and models to proactively check for hallucinations, bias, and other failure modes (a minimal eval sketch follows this list).
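
Tying caveats to context can be as simple as inspecting the output before rendering it. The sketch below attaches targeted notes based on what the text contains; the detection rules and messages are illustrative placeholders:

```ts
// Minimal sketch: attach targeted caveats based on what the output
// actually contains. Detection rules and messages are placeholders.
function contextualCaveats(output: string): string[] {
  const caveats: string[] = [];
  if (/\b\d{4}\b/.test(output)) {
    caveats.push("Check dates for accuracy.");
  }
  if (/[$€£]\s?\d/.test(output)) {
    caveats.push("Verify figures against a primary source.");
  }
  if (/https?:\/\//.test(output)) {
    caveats.push("Links may be outdated or fabricated; confirm before sharing.");
  }
  return caveats;
}
```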
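
For the last point, a minimal eval sketch. It assumes a generate function that wraps whatever model you use; the cases and the pass check are placeholders rather than a real eval framework:

```ts
// Minimal eval sketch: run known prompts through the model and flag
// outputs that fail a simple check. `generate` is a placeholder for
// whatever client wraps your model; the cases are illustrative.
type EvalCase = { prompt: string; mustInclude: RegExp };

async function runEvals(
  generate: (prompt: string) => Promise<string>,
  cases: EvalCase[],
): Promise<{ prompt: string; passed: boolean }[]> {
  const results: { prompt: string; passed: boolean }[] = [];
  for (const c of cases) {
    const output = await generate(c.prompt);
    results.push({ prompt: c.prompt, passed: c.mustInclude.test(output) });
  }
  return results;
}
```

Failures here are a signal to tighten the prompt or model choice before shipping, not something a caveat should be expected to absorb.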