Google DeepMind has identified a fundamental architectural limitation in Retrieval-Augmented Generation (RAG) systems that rely on dense embeddings: fixed-size embeddings cannot represent every combination of relevant documents once the database grows past a certain size, which caps retrieval effectiveness.
The core issue is the representational capacity of fixed-size embeddings: a fixed-dimension embedding cannot represent every possible combination of relevant documents once the database surpasses a certain size. The bound follows from results in communication complexity and sign-rank theory.
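One way to make the connection concrete, paraphrased here rather than quoted from the report: exact retrieval under dot-product scoring is a sign condition on the query-document relevance matrix, and that condition bounds the achievable relevance patterns by the embedding dimension.

```latex
% A paraphrased sketch, not the report's exact statement.
% Let $A \in \{0,1\}^{m \times n}$ mark which of $n$ documents are relevant to each of $m$ queries.
% A $d$-dimensional embedder produces $Q \in \mathbb{R}^{m \times d}$ and $D \in \mathbb{R}^{n \times d}$,
% and it retrieves exactly the relevant set for every query iff per-query thresholds $\tau_q$
% separate relevant from irrelevant scores:
\operatorname{sign}\!\bigl((Q D^{\top})_{qj} - \tau_q\bigr) \;=\; 2A_{qj} - 1 \qquad \text{for all } q, j.
% Since $Q D^{\top} - \tau \mathbf{1}^{\top}$ has rank at most $d + 1$, its sign pattern can only
% match $2A - \mathbf{1}$ if
\operatorname{rank}_{\pm}\!\bigl(2A - \mathbf{1}\bigr) \;\le\; d + 1,
% so once the corpus contains relevance patterns whose sign rank exceeds $d + 1$, no
% $d$-dimensional single-vector embedder can retrieve every combination correctly,
% regardless of how it is trained.
```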
Theoretical capacity limits based on embedding size have been established. Embeddings of 512 dimensions reach their limit around 500,000 documents. Increasing the dimensions to 1024 extends the limit to approximately 4 million documents. A further increase to 4096 dimensions raises the ceiling to 250 million documents. These limits represent best-case estimates under free embedding optimization, where vectors are directly optimized against test labels. According to the Google DeepMind report, real-world language-constrained embeddings are anticipated to fail even sooner.
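A minimal sketch of that best-case setup, assuming PyTorch and illustrative hyperparameters (the function name, loss, and training schedule here are not the report's exact protocol): query and document vectors are treated as free parameters and optimized directly against the relevance labels, so any failure to reach full recall reflects the dimensionality itself rather than a weak encoder.

```python
# Free embedding optimization sketch: vectors are free parameters trained
# directly against relevance labels, giving a best-case probe of what a
# d-dimensional embedder could achieve. Hyperparameters are illustrative.
import torch

def free_embedding_capacity(num_queries, num_docs, dim, relevance, steps=2000, lr=0.05):
    """relevance: (num_queries, num_docs) binary tensor of ground-truth labels."""
    Q = torch.randn(num_queries, dim, requires_grad=True)
    D = torch.randn(num_docs, dim, requires_grad=True)
    opt = torch.optim.Adam([Q, D], lr=lr)
    labels = relevance.float()
    for _ in range(steps):
        scores = Q @ D.T                      # dense dot-product retrieval scores
        loss = torch.nn.functional.binary_cross_entropy_with_logits(scores, labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return Q.detach(), D.detach()

# Example: 200 queries, 1,000 documents, 2 relevant documents per query, 32 dimensions.
torch.manual_seed(0)
rel = torch.zeros(200, 1000)
for q in range(200):
    rel[q, torch.randperm(1000)[:2]] = 1.0
Q, D = free_embedding_capacity(200, 1000, 32, rel)
top2 = (Q @ D.T).topk(2, dim=1).indices
hits = torch.gather(rel, 1, top2).sum(dim=1)   # relevant documents found in the top 2
print("mean recall@2:", (hits / 2).mean().item())
```

Sweeping the dimension and corpus size in a setup like this is how a capacity curve of the kind described above could be traced.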
To demonstrate the limitation empirically, Google DeepMind introduced the LIMIT benchmark, designed to stress-test embedders. It comes in two configurations: LIMIT full and LIMIT small. LIMIT full contains 50,000 documents, and even strong embedders collapse on it, with recall@100 often falling below 20%. LIMIT small contains just 46 documents, yet models still struggle: performance varies widely and remains far from reliable.
On the LIMIT small configuration, Promptriever Llama3 8B achieved 54.3% recall@2 (4096 dimensions), GritLM 7B 38.4% (4096 dimensions), E5-Mistral 7B 29.5% (4096 dimensions), and Gemini Embed 33.7% (3072 dimensions). Even with only 46 documents, no embedder achieves full recall, underscoring that the limitation stems from the single-vector embedding architecture itself, not solely from dataset size.
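For reference, recall@k in this setting is the fraction of each query's relevant documents that appear among the k highest-scoring documents, averaged over queries. A small helper along these lines (the function name is ours, not the benchmark's) makes the reported numbers concrete.

```python
# Recall@k over a score matrix, as assumed here: fraction of each query's
# relevant documents retrieved in the top k, averaged over queries.
import numpy as np

def recall_at_k(scores, relevant, k):
    """scores: (num_queries, num_docs) similarity matrix;
    relevant: list of sets of relevant document ids, one per query."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    recalls = []
    for q, rel_docs in enumerate(relevant):
        hit = len(rel_docs & set(topk[q].tolist()))
        recalls.append(hit / len(rel_docs))
    return float(np.mean(recalls))

# Tiny usage example with two queries and three documents.
scores = np.array([[0.9, 0.1, 0.8],
                   [0.2, 0.7, 0.3]])
print(recall_at_k(scores, [{0, 2}, {1}], k=2))   # -> 1.0
```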
In contrast, BM25, a classical sparse lexical model, sidesteps this limitation. Sparse models operate in effectively unbounded dimensional spaces, which lets them capture combinations of relevant documents that dense embeddings cannot.
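To make the contrast tangible, here is a bare-bones BM25 scorer in plain Python (the parameter defaults and toy corpus are illustrative): every distinct term acts as its own dimension, so the effective space grows with the vocabulary instead of being fixed in advance.

```python
# Minimal BM25 sketch. k1 and b are common default-style parameters chosen
# here by assumption; production systems would use an inverted index.
import math
from collections import Counter

def bm25_scores(query_tokens, corpus_tokens, k1=1.5, b=0.75):
    """corpus_tokens: list of token lists, one per document."""
    n_docs = len(corpus_tokens)
    avgdl = sum(len(d) for d in corpus_tokens) / n_docs
    # document frequency of each query term
    df = {t: sum(1 for d in corpus_tokens if t in d) for t in set(query_tokens)}
    scores = []
    for doc in corpus_tokens:
        tf = Counter(doc)
        score = 0.0
        for t in query_tokens:
            if df[t] == 0:
                continue
            idf = math.log(1 + (n_docs - df[t] + 0.5) / (df[t] + 0.5))
            denom = tf[t] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * tf[t] * (k1 + 1) / denom
        scores.append(score)
    return scores

docs = [["dense", "embeddings", "hit", "a", "ceiling"],
        ["bm25", "is", "a", "sparse", "lexical", "model"],
        ["sparse", "models", "scale", "with", "vocabulary"]]
print(bm25_scores(["sparse", "lexical"], docs))
```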
Current RAG implementations often assume that embeddings can scale indefinitely with increasing data volumes. Google DeepMind’s research shows that this assumption does not hold: embedding dimensionality inherently constrains retrieval capacity. The constraint matters most for enterprise search engines managing millions of documents, agentic systems relying on complex logical queries, and instruction-following retrieval tasks where queries dynamically define relevance.
Existing benchmarks, such as MTEB, do not adequately capture these limitations because they test only a narrow subset of query-document combinations. The research team suggests that scalable retrieval requires moving beyond single-vector embeddings.
Alternatives to single-vector embeddings include Cross-Encoders, which achieve perfect recall on the LIMIT benchmark by directly scoring query-document pairs, albeit with high inference latency. Multi-Vector Models, such as ColBERT, offer more expressive retrieval by assigning multiple vectors per sequence, improving performance on LIMIT tasks. Sparse Models, including BM25, TF-IDF, and neural sparse retrievers, scale better in high-dimensional search but lack semantic generalization.
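As a rough illustration of the multi-vector idea, the following NumPy sketch scores a document with ColBERT-style late interaction, matching each query token vector against its best document token vector. The shapes, normalization, and random vectors are assumptions for the example, not ColBERT's actual encoder outputs.

```python
# ColBERT-style "MaxSim" late interaction: every query token keeps its own
# vector and is matched to its best document token, which is what makes
# multi-vector retrieval more expressive than a single pooled embedding.
import numpy as np

def maxsim_score(query_vecs, doc_vecs):
    """query_vecs: (num_query_tokens, dim); doc_vecs: (num_doc_tokens, dim)."""
    # cosine similarity via L2-normalized dot products
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sim = q @ d.T                      # (num_query_tokens, num_doc_tokens)
    return sim.max(axis=1).sum()       # best document token per query token, summed

rng = np.random.default_rng(0)
query = rng.normal(size=(8, 128))      # 8 query token vectors, 128 dimensions each
doc_a = rng.normal(size=(40, 128))
doc_b = rng.normal(size=(60, 128))
print(maxsim_score(query, doc_a), maxsim_score(query, doc_b))
```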
The key finding is that architectural innovation, rather than simply increasing embedder size, is essential. The research team’s analysis reveals that dense embeddings, despite their widespread use, are constrained by a mathematical limit. Dense embeddings cannot capture all possible relevance combinations once corpus sizes exceed limits tied to embedding dimensionality. This limitation is concretely demonstrated by the LIMIT benchmark, with recall@100 dropping below 20% on LIMIT full (50,000 documents) and even the best models maxing out at approximately 54% recall@2 on LIMIT small (46 documents). Classical techniques like BM25, or newer architectures such as multi-vector retrievers and cross-encoders, remain essential for building reliable retrieval engines at scale.