are now able to handle vast inputs: their context windows range from 200K tokens (Claude) to 2M tokens (Gemini 1.5 Pro). That's between 280 and 2,800 pages of text! These massive context windows suggest that in most practical scenarios we don't need to worry much about hitting an LLM's input limits. However, our newest research shows that this is not true. For many problems with complex context, the LLM's effective working memory can get overloaded by relatively small inputs, long before context window limits are reached.
Our paper introduces a new theoretical model of computation to explain why this happens and shows in experiments that the theory's predictions match real-world results. Our findings can finally explain previously reported LLM failures, such as the inability to detect plot holes, struggles to understand long stories, and incorrect answers to questions about collections of similar documents.
Below we lay out the details by answering the following questions:
- What happens if we exceed an LLM’s working memory?
- Does my task need a lot of working memory?
- What can I do if my task needs a lot of working memory?
- Why do certain tasks need a lot of working memory?
What happens if we exceed an LLM’s working memory?
Intuitively speaking, tasks that require a lot of context to answer a question correctly also require the LLM to track a lot of information. As the size of this “working set” needed to correctly reason about the answer grows, it gets more likely that the LLM will make mistakes, because it is unable to retain the relevant information in its limited working memory.
Consider the following example. Say we want to debug part of someone's code and need to figure out whether the final value of the variable x7 is "a" or "b":
x6 = "a"
x4 = "b"
x0 = x6
x2 = x4
x3 = x0
x8 = x2
x9 = x3
x7 = x3
This variable tracking task requires a lot of context to compute the answer, since failing to attend to even a single line of the code can lead to an incorrect result. Running experiments on this task with a number of frontier models shows that they all regress to random guessing between the two answers as the number of variables grows:
This experiment indicates that these LLMs can keep track of at most n = 5 to 10 variables before exceeding their working memory capacity. After this, performance rapidly degrades to 50–50 random guessing.
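If you want to probe a model yourself, the sketch below generates such a chain of assignments of arbitrary length and computes the ground-truth answer to compare against the model's reply. The prompt wording and helper names are illustrative, not the exact setup from our experiments.

import random

def make_variable_tracking_task(n_vars: int, seed: int = 0):
    """Generate a chain of variable assignments plus its ground-truth answer.

    The first two variables are bound to "a" and "b"; every later variable
    copies from a randomly chosen earlier one, mirroring the example above.
    """
    rng = random.Random(seed)
    names = [f"x{i}" for i in range(n_vars)]
    rng.shuffle(names)

    assignments = {names[0]: '"a"', names[1]: '"b"'}
    lines = [f'{names[0]} = "a"', f'{names[1]} = "b"']
    for i in range(2, n_vars):
        source = names[rng.randrange(i)]        # copy from an earlier variable
        assignments[names[i]] = source
        lines.append(f"{names[i]} = {source}")

    query = names[-1]
    value = assignments[query]
    while value not in ('"a"', '"b"'):          # follow the copy chain to a root literal
        value = assignments[value]

    prompt = "\n".join(lines) + f'\n\nWhat is the final value of {query}: "a" or "b"?'
    return prompt, value.strip('"')

prompt, answer = make_variable_tracking_task(n_vars=20, seed=1)
# Send `prompt` to the model of your choice and compare its reply to `answer`;
# accuracy should drift toward 50% as n_vars grows once working memory is exceeded.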
Does my task need a lot of working memory?
So now you’re probably curious whether working memory limits might be an issue for the task you are trying to solve. The first thing we recommend is checking if the task at hand is similar to any of the tasks we theoretically analyze in our paper. We call tasks BAPO-hard if they need a lot of working memory under our BAPO model (discussed more below). Tasks we know are hard theoretically include:
- Graph reachability: May occur in complex summarization, entity tracking, variable tracking, or logical deduction (see the sketch after this list)
- Majority: May occur in review classification, finding a consensus opinion, etc.
- Reasoning over triples: For example, constructing answers from knowledge graphs
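To make the first item concrete: the variable-tracking puzzle above is really a graph reachability question. Each assignment xi = xj is a directed edge, and answering the question amounts to deciding which root value ("a" or "b") the queried variable reaches. A minimal sketch of this reduction (the function name is ours, not from the paper):

def reaches_root(edges: dict, query: str) -> str:
    """Follow assignment edges until a root literal ("a" or "b") is reached."""
    node = query
    while node not in ("a", "b"):
        node = edges[node]
    return node

# The edges from the snippet above: x7 -> x3 -> x0 -> x6 -> "a"
edges = {"x6": "a", "x4": "b", "x0": "x6", "x2": "x4",
         "x3": "x0", "x8": "x2", "x9": "x3", "x7": "x3"}
print(reaches_root(edges, "x7"))  # prints "a"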
Likewise, you can see if your task is BAPO-easy:
- Minimum/Maximum: For example, return the most negative or positive review in a list
- Index or Needle-in-a-Haystack: E.g., find out whether a topic is discussed
Intuitively, problems where only a small piece of information needs to be tracked to answer the question have low working memory requirements (e.g., Needle-in-a-Haystack). If the answer requires almost all the input tokens and no short summary exists, the working memory requirements are high.
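For instance, finding the most negative review needs only a constant-size working set: a single pass that keeps one running candidate is enough, which is the intuition behind minimum/maximum being BAPO-easy. A small sketch of that intuition (the score function is a placeholder for whatever rating you use):

def most_negative(reviews, score):
    """Single pass that keeps only one candidate in memory: a constant-size
    working set, which is why min/max-style tasks have low requirements."""
    best = None
    for review in reviews:
        if best is None or score(review) < score(best):
            best = review
    return best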
If your task is not on the above list, you can use your judgement to determine if there is an easy solution that doesn’t need a lot of memory, e.g., there is some easy attention-based lookup the LLM can perform to answer the question, or some way to summarize the context (without knowing the question a priori) so that your question can be answered from the summary. If not, your problem might require substantial working memory. In this case, LLMs are at risk of failing at your task, particularly as the size of the task increases (e.g., number of variables, relevant pieces of information). Don’t assume that because the answer is computable from the context, an LLM can compute it.
What can I do if my task needs a lot of working memory?
If you realize that your task requires a lot of working memory and often fails, here are several theoretically motivated fixes that can increase your chances of good performance:
- Use a reasoning-enabled model (and hope it doesn't run out of tokens). We show that, in theory, reasoning tokens enable LLMs to solve any BAPO-hard task; however, the number of reasoning tokens required to overcome working memory limits might be extremely large (as the experiments in our paper show). And in practice, even the best reasoning models still make mistakes.
- Based on our theoretical results, you can decompose your problem into one with a more compact intermediate representation that is less likely to exceed working memory limits. For example, instead of asking the LLM to reason over the full HTML of a webpage, provide a simplified representation such as the rendered text only. Similarly, in RAG scenarios it may help to pre-annotate or pre-combine the data so that the final answer is easy to obtain from the smaller summaries.
- Finally, you can outsource working-memory-heavy pieces to an external solver or tool. For example, instead of asking for the majority opinion directly, classify each opinion separately (BAPO-easy) and then aggregate the labels in Python rather than asking the LLM to count (see the sketch after this list).
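Here is what that last decomposition might look like; classify_sentiment stands in for whatever single-review LLM call or classifier you use (it is not a specific API):

from collections import Counter

def majority_opinion(reviews, classify_sentiment):
    """Classify each review in isolation (a BAPO-easy, per-item call),
    then take the majority vote outside the LLM."""
    labels = [classify_sentiment(review) for review in reviews]  # one small call per review
    return Counter(labels).most_common(1)[0][0]                  # aggregation happens in Python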
Keep in mind that these fixes might not work for all tasks, especially when it is not clear how to decompose a task into less working-memory-intensive subtasks. This is a gap that future research can hopefully fill.
Why do certain tasks need a lot of working memory?
For those interested, this section delves a little deeper into the theory from our work. To analyze which tasks need a lot of working memory, we first developed an abstract model of how transformers compute solutions, and then used this model to prove which tasks are hard and which are easy.
As illustration, consider the task of reading a newly released long book and then answering a question about it. There are roughly two strategies humans can use after reading. If one has a large working memory and can recall all the book’s crucial information, one can answer the question straight off the top of one’s head. If one does not, and can only recall the big picture ideas, one can use this to find the rough location of relevant information in the book and flip back to the page(s) to find the answer.
Now, consider how a transformer-based LLM processes the same task. It will read over the content of the book and then compute an answer at the last position after it reads the questionª. While processing the content of the book, the LLM can attend to a few relevant locations to compute the answer (the equivalent of flipping through pages). Or it can use contextual embeddings of the book to store important facts and answer the question from them directly (the equivalent of recall). What it cannot do is go back and read the book in its entirety again with the question in mind, because causal attention allows information to only flow forward through the context window.
In this scenario, for both humans and AI, a larger working memory means a better chance of having stored the information needed to compute the correct answer, particularly when things get complicated. Okay, but how do we define more formally how much working memory an LLM task needs? In our paper, we do this through the bounded attention prefix oracle (BAPO) model.
The BAPO model provides a simplified computational characterization that we can analyze theoretically to prove which problems require more or less bandwidth (i.e., working memory) for an LLM. To compute an answer, the BAPO model uses (something like) the two strategies from above:
- The BAPO model can use a prefix oracle f to send a bits of information forward ↔ Memorize information while reading
- The BAPO model can also use an attention oracle g to attend to b tokens from past tokens ↔ Flip back to pages
We then define the working memory requirements for a task as the combination of two BAPO bandwidth parameters (a, b) — the first refers to how much information is pre-computed and passed on (bandwidth a) and the second refers to how much can be looked up after the fact (bandwidth b). Why is working memory the combination of two parameters? It’s because there is a trade-off: the more information one has memorized, the less information one can look up.
If a task has constant bandwidth requirements (i.e., a,b in O(1)), then the task will likely not exceed LLM working memory size, but if a task has bandwidth requirements that depend on the size of the input (e.g., sequence or alphabet length), then it will eventually exceed the working memory limits and result in failure.
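To make the two parameters a bit more tangible, here is a rough schematic of a BAPO-style computation. It illustrates the interface described above, not the formal definition from the paper, and all names are ours: the prefix oracle may pass at most a bits forward, and the attention oracle may select at most b prefix tokens to re-read before the answer is produced.

def bapo_answer(prefix_tokens, suffix_tokens,
                prefix_oracle, attention_oracle, decide,
                a: int, b: int):
    """Schematic of a BAPO(a, b) computation (illustrative only).

    The answer may depend on the suffix, on at most `a` bits of information
    precomputed from the prefix, and on at most `b` prefix tokens that are
    looked up after the fact.
    """
    summary = prefix_oracle(prefix_tokens)                 # "memorize while reading"
    assert len(summary) <= a, "prefix bandwidth exceeded"

    positions = attention_oracle(summary, suffix_tokens)   # "flip back to pages"
    assert len(positions) <= b, "attention bandwidth exceeded"

    attended = [prefix_tokens[i] for i in positions]
    return decide(summary, attended, suffix_tokens)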
Conclusions
Working memory is an important bottleneck in transformer-based LLMs. Long before the information itself exceeds the context window size, the transformer's ability to effectively represent and communicate that information within the window is exhausted. Current long-context benchmarks rely heavily on Needle-in-a-Haystack problems, which we have shown are BAPO-easy. This means that current benchmark performance will not accurately capture performance over the full range of long-context reasoning tasks.
Tasks such as complex summarization, code tracing, or inconsistency detection are hard for LLMs according to our theoretical model. They can contain BAPO-hard subtasks, which lead to high working memory requirements and, in turn, to failures in practice. While recent advances in context window length have broadened the applicability of LLMs, longer contexts also increase the complexity of the associated tasks. This will likely increase the frequency of BAPO-hard tasks and lead to more LLM failures.
We outlined a number of strategies for coping with high working memory requirements, such as using reasoning tokens or decomposing the task. However, these come with their own limitations; for example, some tasks might need a vast number of reasoning tokens to overcome bandwidth limits in practice. We hope that future research can provide more general solutions, and perhaps even new architectures beyond transformers.
Footnotes
ª You may wonder whether having the question first changes the working memory requirements. No — see paper for more details.