Home » Why We Might Lose Our Only Window Into How AI Thinks

Why We Might Lose Our Only Window Into How AI Thinks

A paper titled “Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety” proposes a method for improving AI safety by monitoring the internal reasoning of AI models. The research is a collaborative effort from dozens of experts across the UK AI Security Institute, Apollo Research, Google DeepMind, OpenAI, Anthropic, Meta, and several universities. It suggests that analyzing an AI’s “chain of thought,” or its process of thinking out loud in text, offers a unique but potentially fragile way to detect harmful intentions.

A window into the AI’s mind

Modern AI models can be prompted to produce a chain of thought (CoT), which is a textual output of their reasoning steps before they give a final answer. For models explicitly trained to perform extended reasoning, this CoT can act like a look into the model’s internal computational process. An automated system, or CoT monitor, could read this text to flag suspicious or harmful plans. The paper states that for very difficult tasks, AI models built on the Transformer architecture must use their chain of thought as a form of working memory, making their reasoning steps visible. In other cases, models may have a natural tendency to think out loud even when it is not strictly necessary.

This approach has already been used in safety research to find undesirable model behaviors. Researchers have observed models explicitly stating intentions in their CoT, such as “Let’s hack” or “Let’s sabotage,” which allows monitors to catch misbehavior more effectively than by only observing the final action. It can also help discover a model’s underlying goals and identify flaws in AI evaluation methods, such as when a model knows it is being tested.


AI learns language like a kid learns to read


Why this window could close

The paper warns that the ability to monitor AI chains of thought is fragile and could be lost due to future development practices. The authors identify several ways monitorability could be degraded:

  • Drift from legible reasoning: As models are trained more intensely with reinforcement learning based on outcomes rather than human-generated data, their chains of thought could drift from legible English into a more optimized, unreadable format.
  • Direct supervision of CoT: If developers directly train the chain of thought to be shorter, use a certain style, or avoid specific content, it might no longer be a faithful reflection of the model’s actual reasoning process.
  • Indirect optimization pressure: Training a model’s final output to look good to a preference model could indirectly put pressure on the CoT to also appear benign, even if the underlying reasoning is not.
  • Novel architectures: Future AI architectures might perform complex reasoning internally in a continuous latent space without needing to verbalize their thoughts, which would eliminate the safety advantages of CoT monitoring.

The authors recommend that AI researchers develop standardized evaluations to measure CoT monitorability and study how different training pressures affect it. For frontier AI developers, the paper suggests they should track the monitorability of their models, publish the results, and use these scores when making decisions about training and deployment. For example, developers might choose an earlier model checkpoint if monitorability degrades during training or justify a small decrease in monitorability if it results from a process that dramatically improves the model’s alignment.


Featured image credit

Related Posts

Leave a Reply

Your email address will not be published. Required fields are marked *