Citations

Citations give users a way to see where an AI’s response comes from. They connect a generated output back to its underlying material, whether that’s a PDF, a transcript, a web page, or an internal database. Their role is to create transparency and help users verify information.

Citations also shape the user's perception of the AI as an authority, lending credibility to the summarized content. This can backfire: summaries are susceptible to hallucination and misrepresentation.

Encourage users to dig into cited content by surfacing metadata and summaries of the referenced material, which also helps weed out hallucinated sources. To counteract erroneous facts or conclusions drawn from the source material, deep link to the relevant passage or pull quotes up into the summary, for example when the user hovers on a citation.
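One way to support both deep linking and pulled quotes is to model each citation as a small record pairing the source with the exact passage the model relied on. The sketch below is illustrative, not any product's API: the `Citation` shape and `deepLink` helper are hypothetical, and the deep link uses the browser text-fragment syntax (`#:~:text=`), which supporting browsers use to scroll to and highlight the quoted passage.

```typescript
// Hypothetical citation record: pairs a source with the exact passage
// the summary is grounded in, so the UI can pull quotes into hover
// previews and deep link into the document.
interface Citation {
  url: string;    // source document
  title: string;  // metadata for scanning
  quote: string;  // passage the summary relied on
}

// Build a deep link using the URL text-fragment syntax (#:~:text=...).
function deepLink(c: Citation): string {
  return `${c.url}#:~:text=${encodeURIComponent(c.quote)}`;
}
```

A hover preview could then render `c.quote` directly while linking through to `deepLink(c)`, giving users the quote and the jump-to-source in one gesture.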

Variations and forms

  • Inline highlights: Best for attached material like PDFs. Example: Adobe Acrobat cites passages directly in a summary panel so users can jump to the source text.
  • Direct quotations: For longer texts or transcripts, show the specific quote the model relied on to generate its summary. Example: Granola paraphrases or directly quotes the segments of the transcript behind each takeaway.
  • Multi-source references: Found in search and aggregation. Example: Perplexity shows citations inline, with metadata such as title and favicon to help scan relevance.
  • Lightweight links: Best for verification rather than exploration. Example: Copy.ai lists full URLs at the end of generated copy, prioritizing transparency over polish.

Each variation comes with tradeoffs. Inline highlights and direct quotations are useful to verify specific information, but can prematurely converge on a single source or concept when drawing from a broader reference set. Multi-source references and lightweight links are useful to lead users deeper into a topic or large set of content, but are less reliable as the source material is more likely to be external and unverified.

Design considerations

  • Let context guide specificity. Point to exact passages, timestamps, or assets rather than broad documents when information is presented as factual and specific. Rely on broader reference collections when the user’s intent is exploratory discovery.
  • Place citations where people expect them. Inline cues work for sentence-level claims. Panels or drawers work better for long-form exploration. Consistency matters more than polish so users know where to look.
  • Respect the user’s focus. Allow users to hover for a short preview or click through to the full source. This dual mode balances speed with thoroughness.
  • Make broken or missing citations explicit. If a source is unavailable, call it out directly rather than hiding the gap. Empty states maintain trust; filler content erodes it.
  • Use metadata to support scanning. Titles, site names, and favicons help users quickly judge relevance. Too much detail can overwhelm, so keep labels short and consistent.
  • Let users rescope references. After an initial generation, users may want to filter by domain, remove weak sources, or attach new files. Give them controls to reshape citations without restarting the whole process.
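Rescoping can be as simple as a client-side filter over already-generated citations, so users can narrow sources without regenerating the response. A minimal sketch, assuming a hypothetical `SourceRef` shape and a user-chosen domain allowlist:

```typescript
// Hypothetical citation shape with just the fields needed for rescoping.
interface SourceRef {
  url: string;
  title: string;
}

// Keep only citations whose hostname is on the user's allowlist.
// Filtering the reference set client-side lets users drop weak sources
// without restarting the whole generation.
function rescope(refs: SourceRef[], allowedDomains: string[]): SourceRef[] {
  return refs.filter((r) => allowedDomains.includes(new URL(r.url).hostname));
}
```

The same pattern extends to removing individual sources or re-running generation against the narrowed set.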

Examples

Dovetail helps people reviewing UX research summaries trace insights back to their source, such as a user interview.
Intercom’s Fin Copilot provides citations so customers can see where the agent’s responses come from, clarifying the policies and justifications behind decisions.
Granola provides a peek into the cited content when hovering on a citation, paraphrasing the relevant parts of the transcript and providing direct quotes.
Perplexity popularized citations in search, shifting the paradigm from results as an ordered list to results as a synthesis of information.