Disclosure

AI has a trust problem.

One method to help users feel more comfortable is to be upfront and clear about when they are interacting with AI. When users can opt into the experience feeling fully informed, they may be more likely to suspend their reservations long enough for you to deliver value that keeps them engaged.

Disclosure patterns label AI interactions and content so users can distinguish them. Depending on your situation, there are a few options to choose from:

  • For AI-native products, this might not be needed. Tools like Perplexity and ChatGPT are built entirely around AI features, so users will expect its presence. You can help users by separating content they have uploaded or referenced from sources that the bot captures.
  • For blended products, where AI-generated or AI-edited content is interspersed with human-created content, consider how you might label the content created by the machine. This keeps users from inadvertently presenting AI writing as their own and gives them the agency to manage the content within their system.
  • For AI agents, label content delivered by a bot within the chat. People have mixed feelings about interacting with bots, especially when they believe they are speaking with another human. Avoid sticky situations and damage to your brand by being up front. This will become even more important as agentive AI grows in adoption.
  • In all cases, proactively inform users when they are interacting with a product powered by AI, particularly if their data can be collected and they don't have the ability to opt out.

Different approaches

Intercom's Fin explicitly labels the messages sent by the AI. When the conversation is passed to a human, the inline badge on the individual message persists. A user can work backwards in their conversation to see exactly when they started talking to a real person.

Intercom's Fin uses a clear "AI" label to make computer-generated responses stand out.

iA Writer explicitly differentiates between AI-authored text and human-authored text. Copy appears grey if it was brought in from an AI source, and is only set in the standard high-contrast text color once a human has revised it.

iA Writer differentiates content written by AI through the use of text color.
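
One way to build this kind of treatment (a minimal sketch, not iA Writer's actual implementation; the TextSpan shape and spanColor helper are hypothetical) is to track provenance per span of text and drop the muted color only once a human has revised that span:

    // Hypothetical per-span provenance for a blended document: AI-originated
    // text renders in a low-contrast color until a human revises it.
    interface TextSpan {
      text: string;
      fromAi: boolean;        // true if the span was inserted by an AI source
      humanRevised: boolean;  // flips to true once a person edits the span
    }

    function spanColor(span: TextSpan): string {
      // Only unrevised AI text gets the low-contrast treatment.
      return span.fromAi && !span.humanRevised ? "#9a9a9a" : "#111111";
    }

    const draft: TextSpan[] = [
      { text: "Our launch plan ",           fromAi: false, humanRevised: false },
      { text: "covers three milestones. ",  fromAi: true,  humanRevised: false },
      { text: "Dates are confirmed below.", fromAi: true,  humanRevised: true  },
    ];

    draft.forEach((span) => console.log(spanColor(span), span.text));

In an editor, the humanRevised flag would flip as part of normal edit handling, so the visual distinction fades naturally as a person takes ownership of the text.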

Disclosures can take many forms to signify AI-powered activity or content:

  • Bot and assistant labeling. Names, avatars, and badges that distinguish nonhuman actors in chat, comments, notifications, and support.
  • Feature-level disclosure. Inline chips like “AI Assist” or icons that signify AI actions clue users in that an action is guided by a model rather than by the underlying software alone.
  • Output attribution. Watermarks or badges like “AI-generated,” “AI-edited,” or “Summarized with AI” make AI-generated content distinct (see the sketch after this list).
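
These forms are easier to apply consistently when provenance is modeled as data rather than as one-off styling. Below is a minimal TypeScript sketch, with hypothetical names (ContentProvenance, disclosureLabel), that derives the verb-based labels above from a provenance record:

    // Hypothetical provenance model: every piece of content records who (or
    // what) produced it and which AI action, if any, was applied.
    type Author = "human" | "ai" | "ai_edited";
    type AiAction = "generated" | "summarized" | "rewritten" | "translated";

    interface ContentProvenance {
      author: Author;
      aiAction?: AiAction; // present only when AI touched the content
      model?: string;      // optional: which model or assistant acted
    }

    // Map provenance to a verb-based disclosure label, or no label at all.
    function disclosureLabel(p: ContentProvenance): string | null {
      if (p.author === "human") return null; // purely human content gets no badge
      switch (p.aiAction) {
        case "summarized": return "Summarized with AI";
        case "rewritten":  return "Rewrote with AI";
        case "translated": return "Translated with AI";
        default:           return "AI-generated";
      }
    }

    // Example: a support reply drafted by an assistant, then lightly edited.
    const reply: ContentProvenance = { author: "ai_edited", aiAction: "rewritten" };
    console.log(disclosureLabel(reply)); // "Rewrote with AI"

Because chat, comments, and notifications would all derive their badges from the same record, the disclosure stays consistent across surfaces.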

Dark patterns

At the other extreme are tools that offer no opt-out at all, such as Meta's suite of products. They gain the benefit of access to more training data, but at the cost of further degrading their trust and integrity with the public. For a company as large as Meta, perhaps the ends justify the means. It's up to you to determine whether you are willing to take that risk at the cost of user autonomy.

Design considerations

  • Name the actor, every time. Use a clear sender line and a distinct avatar for AI messages, and keep that identity persistent across handoffs. A simple “AI” tag beside an AI agent's name can help avoid confusion or frustration for users (see the sketch after this list).
  • Label the action, not just the feature. Disclosures with verbs like “Summarized with AI” or “Rewrote with AI” are more informative than a generic “AI” signifier. They set accurate expectations and alert users to what they may need to verify.
  • Use color or other styles as an affordance. Give AI-generated content or conversation a distinct look to visually differentiate it from other content on the page, such as a subtle background and a persistent “Assistant” header. Ensure this treatment is only used for AI-generated content and does not match similar treatments used for human-generated content.
  • Balance brand choices with usability. If you use a name for AI, ensure users can't confuse it with a human, and be consistent. For example, iA Writer uses low-opacity text to distinguish synthetic text, which operates as a clear affordance while maintaining their brand-oriented approach to design.
  • Don’t fake human interaction. Especially in more sensitive contexts like support, make it clear to users when they are speaking with a human vs. AI, and make it easy to get to a human when needed. This sets appropriate expectations and avoids surprises that erode trust.
  • Disclose by default for realistic synthetic media. If content could plausibly be mistaken for real people or events, require creator disclosure and add platform labels. Major platforms like YouTube and Meta both enforce labeling for realistic AI media, offering a strong user expectation baseline.
  • Allow opt out. Give users the option to opt out of interacting with AI by having it announce its presence or by requiring consent before someone is recorded. A disclosure alone may not be sufficient to capture consent.
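
Several of these considerations come together in a small React sketch (TypeScript; the ChatMessage shape and MessageBubble component are hypothetical, not drawn from any particular product): the sender name and “AI” tag travel with each message so they persist after a handoff, and the distinct background is reserved for AI-authored messages only.

    import React from "react";

    // Hypothetical message shape: the sender identity travels with each
    // message, so earlier AI messages keep their badge after a human handoff.
    interface ChatMessage {
      id: string;
      senderName: string; // e.g. "Fin" or "Maria"
      senderIsAi: boolean;
      text: string;
    }

    function MessageBubble({ message }: { message: ChatMessage }) {
      return (
        <div
          style={{
            // Distinct treatment reserved for AI-authored messages only.
            background: message.senderIsAi ? "#f3f0ff" : "#ffffff",
            borderRadius: 8,
            padding: 12,
            marginBottom: 8,
          }}
        >
          <div style={{ fontWeight: 600 }}>
            {message.senderName}
            {/* Name the actor, every time: a persistent "AI" tag beside the name. */}
            {message.senderIsAi && (
              <span
                style={{
                  marginLeft: 6,
                  fontSize: 12,
                  padding: "1px 6px",
                  borderRadius: 4,
                  background: "#e0d7ff",
                }}
              >
                AI
              </span>
            )}
          </div>
          <p style={{ margin: "4px 0 0" }}>{message.text}</p>
        </div>
      );
    }

    export default MessageBubble;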

Examples

TikTok has adopted the Content Credentials standard to automatically label AI-generated content across the platform with a disclosure in the post data.