As AI becomes increasingly autonomous, humans require affordances that allow them to monitor it without disrupting its flow, and to intervene if necessary. The concept of keeping “humans in the loop” dates back to the roots of AI in Norbert Wiener’s pivotal Cybernetics.
This approach allows autonomy under supervision: while acting, the system must remain observable, interruptible, and accountable.
In the physical world, assistants like Alexa and autonomous vehicles have already established the pattern of AI informing users when it is active or when it requires human intervention, typically through color, sound, or lights.

In the digital space, small affordances that keep the user informed have a long history as well, from spinners to more recent “AI is thinking…”-type elements. These are effective when the risk from errors is low, like a form submission timing out or an AI returning a poor response in conversation. They are not sufficient for riskier endeavors, particularly as AI programs take on more actions on behalf of the user and agents begin interacting with each other.
The intensity of the human-in-the-loop pattern should match the risk of the AI's actions, with thresholds set by the user or by the underlying program. We can bucket this scale into three categories:
1. Ambient cues
These indicators inform the user that the AI is actively working. They are meant to attract attention, show what the AI is seeing, and make it clear that a user can intervene.
Perplexity’s Comet browser offers a recent example. When the AI is working in a tab, the page where it’s active has a slight inset glow. So far that’s the extent of the affordance, but one might imagine other cues, particularly color, being used to inform the user when they need to intervene, much like the colors on the GM steering wheel.

OpenAI’s Operator mode takes a similar approach and shows the user the browser within the context of the conversation. The AI’s activity is updated in real time, and a ••• menu in the top right includes controls that let the user take over.
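
As a rough sketch of how an ambient cue might be wired up (the state names, colors, and labels below are illustrative, not taken from Comet or Operator), the core idea is a simple mapping from the agent’s activity state to a visual treatment the user can read at a glance:

```typescript
// Hypothetical sketch: mapping an agent's activity state to an ambient cue.
// The cue should communicate both "the AI is working here" and "you can step in."

type AgentState = "idle" | "working" | "needs_attention" | "blocked";

interface AmbientCue {
  glowColor: string;   // inset glow around the page the agent controls
  pulse: boolean;      // gentle animation to signal ongoing activity
  label: string;       // short status text, e.g. shown near the tab
}

const CUES: Record<AgentState, AmbientCue> = {
  idle:            { glowColor: "transparent", pulse: false, label: "" },
  working:         { glowColor: "#4f8cff",     pulse: true,  label: "Agent is browsing this page" },
  needs_attention: { glowColor: "#ffb020",     pulse: true,  label: "Agent needs your input" },
  blocked:         { glowColor: "#ff4d4f",     pulse: false, label: "Agent is paused. Take over?" },
};

function cueFor(state: AgentState): AmbientCue {
  return CUES[state];
}

// The UI layer reads the cue and styles the active tab accordingly.
console.log(cueFor("working").label); // "Agent is browsing this page"
```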

2. Stream of consciousness
Understanding how the AI is working through a task and anticipating its next moves allows the user to preempt issues before they occur. Visible reasoning and step-by-step task lists are good examples of this pattern in action. In most cases these are ignored by the user, though they may be useful for retroactively debugging odd behavior. For more complicated or risky actions, however, they let the user actively monitor the AI in the moment, thereby maintaining their agency.
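
A minimal sketch of the pattern, assuming a step stream from the agent runtime (the step source below is faked for illustration): each reasoning step is surfaced as it happens, and the user can interrupt at any point.

```typescript
// Sketch: surfacing an agent's step-by-step work while it runs, with a way to interrupt.
// In a real product the steps would stream from the model or agent runtime.

interface ReasoningStep {
  summary: string;                // human-readable description of the next action
  riskLevel: "low" | "high";
}

async function* fakeAgentSteps(): AsyncGenerator<ReasoningStep> {
  yield { summary: "Searching for flight options", riskLevel: "low" };
  yield { summary: "Comparing prices across three sites", riskLevel: "low" };
  yield { summary: "Entering saved payment details", riskLevel: "high" };
}

async function monitor(steps: AsyncIterable<ReasoningStep>, abort: AbortSignal) {
  for await (const step of steps) {
    // Surface each step as it happens so the user can follow along...
    console.log(`Agent: ${step.summary}`);
    if (step.riskLevel === "high") {
      console.log("High-risk step ahead; consider pausing to review.");
    }
    // ...and stop the run the moment the user objects.
    if (abort.aborted) {
      console.log("User interrupted; agent paused.");
      return;
    }
  }
}

const controller = new AbortController();
monitor(fakeAgentSteps(), controller.signal);
// Elsewhere in the UI, a "Stop" button would call controller.abort().
```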

3. Review and approve
There are limits to the actions an AI can take independently. Whether those limits are set by the user or by the policies of the application, when the AI hits a step that requires human intervention, it needs to alert the user that it is pausing its work. The risk here is that a user who expects the AI to fully take over comes back later to find a task they expected to be done halted midway by an unnecessary blocker.
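
In code, the gate might look something like this sketch (the action and policy shapes are invented for illustration): the agent pauses on any step flagged as requiring approval and does not proceed until the user decides.

```typescript
// Sketch of a review-and-approve gate: the agent pauses, alerts the user,
// and only continues once the step is explicitly approved or rejected.

interface Action {
  description: string;
  requiresApproval: boolean;   // set by user preference or application policy
}

type Decision = "approved" | "rejected";

// Stand-in for whatever channel notifies the user (toast, email, push, etc.)
async function askUser(action: Action): Promise<Decision> {
  console.log(`Paused: "${action.description}" needs your approval.`);
  return "approved"; // a real UI would resolve this from a button press
}

async function runWithGate(actions: Action[]) {
  for (const action of actions) {
    if (action.requiresApproval) {
      const decision = await askUser(action);
      if (decision === "rejected") {
        console.log(`Skipped: ${action.description}`);
        continue;
      }
    }
    console.log(`Done: ${action.description}`);
  }
}

runWithGate([
  { description: "Draft reply to the vendor", requiresApproval: false },
  { description: "Send a $2,000 wire transfer", requiresApproval: true },
]);
```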

Over time, we will likely see user-led rules in settings panels or agent.md files that provide instructions about when to stop during unforeseen loops, similar to how workflow builders like Zapier and Relay allow these steps to be entered manually. This becomes even more complicated as agents work with each other, where a subagent may require approval from a more established or senior agent. Look to parallels in how teams of humans operate interdependently to explore what this could look like in your domain.
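
A speculative sketch of what such user-led rules could look like, whether they live in a settings panel or an agent.md-style file (the rule shape below is invented, not an existing convention):

```typescript
// Invented example of user-declared stop rules, including escalation to a
// supervising agent rather than the human for lower-stakes sub-agent issues.

interface StopRule {
  when: string;                       // condition described in plain language
  escalateTo: "user" | "supervisor";  // who must approve before continuing
}

const stopRules: StopRule[] = [
  { when: "spending more than $100",                    escalateTo: "user" },
  { when: "deleting or overwriting files",              escalateTo: "user" },
  { when: "a sub-agent retries the same step 3 times",  escalateTo: "supervisor" },
];

function mustPause(event: string): StopRule | undefined {
  // Naive string matching for illustration; a real system would evaluate structured conditions.
  return stopRules.find((rule) => event.includes(rule.when));
}

console.log(mustPause("a sub-agent retries the same step 3 times")?.escalateTo); // "supervisor"
```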
