One way to help users feel more comfortable is to be upfront and clear with them when they are interacting with AI. When users can opt into the experience fully informed, they are more likely to suspend their reservations long enough for you to deliver value that keeps them engaged and builds trust.
Disclosure patterns label AI interactions and content so users can distinguish them from content created by humans, or from interactions that don't involve AI at all. Depending on your situation, there are a few options to choose from:
- For strictly AI products, this might not be needed. Tools like Perplexity and ChatGPT are built entirely around AI features, so users will expect its presence. You can still help users by separating the content they have uploaded or referenced from the sources the bot gathers on its own.
- For blended products, where AI-generated or AI-edited content is interspersed with human-created content, consider labeling what the machine produced. This keeps users from inadvertently presenting AI writing as their own, and gives them the agency to manage the content within their system (see the sketch after this list).
- For AI agents, label content delivered by a bot within the chat. People have mixed feelings about discovering they were talking to a bot when they believed they were speaking with a human. Avoid sticky situations and damage to your brand by being up front. This will only become more important as agentic AI grows in adoption.
- In all cases, proactively inform users when they are interacting with a product powered by AI, particularly if their data can be collected and they don't have the ability to opt out.
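To make the labeling idea concrete, here is a minimal sketch of how a blended product might track provenance on each block of content. The `Provenance` type and `disclosureLabel` function are hypothetical names for illustration, not any particular product's API:

```typescript
// Hypothetical provenance tracking for a blended editor.
type Provenance = "human" | "ai-generated" | "ai-edited";

interface ContentBlock {
  id: string;
  text: string;
  provenance: Provenance;
}

// Return a disclosure label for anything the user didn't write
// themselves, so AI output is never silently presented as human work.
function disclosureLabel(block: ContentBlock): string | null {
  switch (block.provenance) {
    case "ai-generated":
      return "Generated by AI";
    case "ai-edited":
      return "Edited with AI";
    default:
      return null; // human-authored content needs no badge
  }
}
```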
Different approaches
Intercom's Fin explicitly labels the messages sent by the AI. When the conversation is passed to a human, the inline badge on the individual message persists. A user can work backwards in their conversation to see exactly when they started talking to a real person.
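As a rough illustration of that pattern (my own sketch, not Intercom's actual implementation), a chat app could attach authorship to each message and render the badge per message rather than per conversation, so the label survives the handoff:

```typescript
// A hypothetical message model for a support chat with AI-to-human handoff.
interface ChatMessage {
  id: string;
  body: string;
  sentAt: Date;
  author: { kind: "ai"; botName: string } | { kind: "human"; name: string };
}

// Badge the author on every message, not just the conversation header,
// so users can scroll back and see exactly where the bot stopped
// and the person started.
function renderAuthorBadge(message: ChatMessage): string {
  return message.author.kind === "ai"
    ? `${message.author.botName} · AI`
    : message.author.name;
}
```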
iA Writer 7 explicitly differentiates between AI-authored and human-authored text. Copy appears grey if it was brought in from an AI source, and is only set in the standard high-contrast text color once a human has revised it.
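One way an editor might implement this, sketched here with illustrative names rather than iA Writer's actual internals, is to track an AI-authorship flag per span of text and clear it when a human edits:

```typescript
// A span of text with a flag recording whether AI inserted it.
interface TextSpan {
  text: string;
  aiAuthored: boolean;
}

// AI-inserted text renders in a muted grey; human-authored text
// uses the standard high-contrast color.
function spanColor(span: TextSpan): string {
  return span.aiAuthored ? "#9e9e9e" : "#1a1a1a";
}

// Once a human revises a span, the flag clears and the text
// returns to the standard color.
function markRevisedByHuman(span: TextSpan, newText: string): TextSpan {
  return { ...span, text: newText, aiAuthored: false };
}
```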
Limitless.ai has also taken an opinionated approach to consent when it comes to AI recording tools. Their new pendant will only capture the words spoken by others if their consent has been registered on the device. Otherwise, only the wearer's words are recorded in their Limitless account.
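Here is a hedged sketch of how consent-gated capture could work, assuming the device can attribute each utterance to a speaker and check a consent registry. This is illustrative, not Limitless's code:

```typescript
// A single attributed utterance from a recording session.
interface Utterance {
  speakerId: string;
  text: string;
}

// Keep the wearer's own words plus anyone who has registered consent;
// everyone else's speech is dropped before it is ever stored.
function filterByConsent(
  utterances: Utterance[],
  wearerId: string,
  consentedSpeakers: Set<string>,
): Utterance[] {
  return utterances.filter(
    (u) => u.speakerId === wearerId || consentedSpeakers.has(u.speakerId),
  );
}
```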
Dark patterns
On the other extreme are tools that offer no opt-out at all, such as Meta's suite of products. They gain access to more training data, but at the cost of further eroding public trust in their integrity. For a company that large, perhaps the ends justify the means (yuck). It's up to you to decide whether you're willing to take that risk at the cost of user autonomy. I guess I've made my opinion known.