Connectors

Connectors establish links between an AI and external systems of record, authorizing the product or agent to read and act on data from tools like Drive, Slack, Jira, calendars, CRMs, and internal wikis.

In chat and other open text surfaces, connectors let people ask natural questions and get grounded answers from their own files, messages, and records. Connectors also power background actions like joining meetings, filing tickets, or drafting emails from context.

A well-designed connector makes its scope explicit: what sources are linked, what permissions apply, and whether the AI is retrieving content or making changes. A poor connector hides this, producing answers without showing which system they came from, or allowing actions without confirmation.
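
To make that explicitness concrete, the UI can be driven by a scope object the user can inspect before anything runs. A minimal TypeScript sketch with hypothetical names (not any product's real API):

```typescript
// Hypothetical, user-visible description of what a connector is allowed to do.
type AccessLevel = "read" | "read-write";

interface ConnectorScope {
  source: string;      // e.g. "Google Drive" or "Jira"
  locations: string[]; // folders, projects, or channels in scope
  access: AccessLevel; // retrieving content vs. making changes
  grantedBy: string;   // who authorized the link: the user or an admin
}

// Everything needed to answer: what is linked, and can the AI change it?
function describeScope(scope: ConnectorScope): string {
  const verb = scope.access === "read" ? "can read" : "can read and modify";
  return `${scope.source} ${verb}: ${scope.locations.join(", ")}`;
}
```

Rendering this description next to every grounded answer is one way to keep the source system and permission level visible rather than hidden.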

Across products, the pattern shows up in three places:

  • Account-level syncs that index sources for grounded answers in chat, like Google Drive in ChatGPT or Slack data in Slack AI.
  • App-side panels that pull context from your suite, like Gemini in Gmail, Docs, and Calendar.
  • Enterprise connectors configured by admins, such as Microsoft Graph connectors or Atlassian Rovo, that unify search and AI across sanctioned tools.

Prompt injection risks

Connecting AI to real data means it will read content that may include hidden instructions. Malicious instructions in calendar invites, emails, docs, tickets, and wiki pages can all try to steer the model. If those instructions trigger tools through a connector, the AI can exfiltrate data or make unintended changes. Design intentionally to keep the experience secure:

  • Treat connected content as untrusted. Parse and summarize first, then gate any tool use behind explicit user intent. Show when the model is about to act on instructions found in retrieved content, and require confirmation with a human-readable preview (see the sketch after this list).
  • Give users simple controls to neutralize risk. Let them exclude specific sources or fields, turn off tool access for a connector, and switch a thread to “read-only” mode. Provide a per-message “Using: Drive, Jira, Slack” chip so they can pause a source mid-flow.
  • Design visible guardrails. Strip or escape prompt-like strings from retrieved content, disable function calls from quoted text, and cap what metadata can be echoed back. Log which sources influenced a proposed action, so people and admins can audit what happened.
  • Plan for failure modes. Flag suspected injection, return a safe summary instead of executing, and offer next steps like “open the source” or “ask a narrower question.” Keep a kill switch to revoke a connector instantly when something looks off.
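
A minimal sketch of the first and last points above, assuming hypothetical names and a deliberately crude regex heuristic (real injection detection needs far more than pattern matching):

```typescript
// Treat retrieved content as untrusted: flag prompt-like strings, log the
// sources behind a proposal, and never execute without user confirmation.
interface Retrieved {
  source: string; // e.g. "Drive", "Jira", "Slack"
  text: string;
}

interface ProposedAction {
  tool: string;           // e.g. "file_ticket"
  preview: string;        // human-readable preview shown before confirmation
  influencedBy: string[]; // sources logged for later audit
}

// Crude stand-in for injection detection.
const INJECTION_PATTERNS = [/ignore (all|previous) instructions/i, /you are now/i];

const looksLikeInjection = (doc: Retrieved): boolean =>
  INJECTION_PATTERNS.some((p) => p.test(doc.text));

type GateResult =
  | { kind: "safe-summary"; flaggedSources: string[] } // suspected injection: don't act
  | { kind: "confirm"; action: ProposedAction };       // clean content: still ask the user

function gate(
  docs: Retrieved[],
  propose: (docs: Retrieved[]) => ProposedAction
): GateResult {
  const flaggedSources = docs.filter(looksLikeInjection).map((d) => d.source);
  if (flaggedSources.length > 0) {
    // Failure mode: return a safe summary and name the suspect sources,
    // instead of executing whatever the content asked for.
    return { kind: "safe-summary", flaggedSources };
  }
  // Even clean content only ever yields a proposal; execution waits for
  // explicit confirmation of the human-readable preview.
  return { kind: "confirm", action: propose(docs) };
}
```

The important property is structural: retrieved text can influence a proposal, but nothing retrieved can reach a tool without passing through the confirmation gate.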

The familiarity and ease of use that AI relies on can cause users to let down their guard. Designing a secure experience is what builds trust and earns continued access to the user's context over time.

Design considerations

  • Aim the connector. Services like Notion and Google Drive contain many layers of data, not all of which a user may want exposed to the AI. Offer source pickers and scoping toggles so users can specify where the AI should look, such as specific workspaces. Make boundaries tangible and adjustable before a query runs, so scope can be tuned to the task at hand (see the sketch after this list).
  • Give connectors a clear visual identity. Represent each integration consistently—icon, color, or short label—so users form quick associations between connectors and data types. Treat connectors as visible participants in the workspace, not invisible pipelines.
  • Design graceful degradation. When a connector fails, hits a rate limit, or partially loads, communicate it directly in-flow (“Drive not reachable,” “Notion token expired”). Offer next actions like “Retry,” “Reconnect,” or “Attach manually.” Avoid blank states that imply success.
  • Expose freshness and state. In settings or elsewhere, display when data was last synced or fetched. If results are cached, label them as such and offer a refresh control. Freshness cues build confidence that the AI is using up-to-date information.
  • Use deep links in citations. Link AI references directly to source records in the user’s own systems so they can verify facts, explore context, and navigate their organization’s knowledge graph. This builds trust and connects synthesis back to living data.
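
Several of these considerations reduce to state the UI must track per connector. A sketch, again with illustrative names only:

```typescript
// Per-connector state covering scoping, degradation, freshness, and citations.
type ConnectorStatus = "connected" | "rate-limited" | "token-expired" | "unreachable";

interface ConnectorState {
  name: string;              // e.g. "Drive"
  status: ConnectorStatus;
  lastSyncedAt: Date | null; // null until the first successful sync
  cached: boolean;           // whether the last answer used cached results
}

interface Citation {
  snippet: string;  // the referenced passage
  deepLink: string; // URL into the source record, e.g. a specific Jira issue
}

// Map each failure mode to in-flow copy plus a next action; never a blank state.
function degradationMessage(c: ConnectorState): { message: string; action: string } {
  switch (c.status) {
    case "unreachable":   return { message: `${c.name} not reachable`, action: "Retry" };
    case "token-expired": return { message: `${c.name} token expired`, action: "Reconnect" };
    case "rate-limited":  return { message: `${c.name} is rate limited`, action: "Attach manually" };
    case "connected":     return { message: freshnessLabel(c), action: c.cached ? "Refresh" : "" };
  }
}

// Freshness cue: when data was last synced, and whether it came from cache.
function freshnessLabel(c: ConnectorState): string {
  if (c.lastSyncedAt === null) return "Not synced yet";
  const label = `Last synced ${c.lastSyncedAt.toLocaleString()}`;
  return c.cached ? `${label} (cached)` : label;
}
```

Citations carrying a deepLink can then render as links straight into the source record, which is what lets users verify a claim in one click.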

Examples

ChatGPT places connectors prominently in the footer of the open text box, signaling how central the company considers them to the user experience.