Researchers from the University of California, Berkeley, Stanford University, and Databricks have introduced a new method called GEPA that replaces traditional trial-and-error learning with an AI’s own language understanding. According to the paper, this approach is not only more accurate but also significantly more efficient, achieving superior results with up to 35 times fewer trial runs than established techniques.
The inefficiency of traditional reinforcement learning
Modern enterprise AI applications are often “compound AI systems,” which are complex workflows that connect multiple AI modules and external tools like databases or code interpreters. A popular way to optimize these systems is through reinforcement learning (RL), which treats the system as a black box. This method runs a task, receives a simple numerical score or “scalar reward” (e.g., 7/10), and uses this feedback to slowly adjust the model’s parameters. The primary drawback of this approach is its “sample inefficiency”.
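To make that loop concrete, here is a minimal, hypothetical sketch of black-box RL-style optimization over a compound system. The names run_pipeline, score_output, and update_policy are placeholders standing in for a real multi-module workflow, its evaluator, and an RL update such as GRPO; they are not any particular library’s API.

```python
import random

def run_pipeline(policy: dict, task: str) -> str:
    # Placeholder for a multi-module workflow (LLM calls, database lookups, code execution).
    return f"answer to {task!r} using prompt: {policy['prompt']}"

def score_output(task: str, output: str) -> float:
    # The only signal RL receives: a single scalar, e.g. 0.7 for "7/10".
    return 0.7

def update_policy(policy: dict, reward: float) -> dict:
    # Nudges parameters from the scalar alone; the traces and error messages
    # that explain the score are thrown away.
    return policy

def optimize_with_scalar_rewards(policy: dict, tasks: list[str], num_rollouts: int = 100_000) -> dict:
    for _ in range(num_rollouts):              # tens of thousands of expensive rollouts
        task = random.choice(tasks)
        output = run_pipeline(policy, task)
        reward = score_output(task, output)
        policy = update_policy(policy, reward)
    return policy
```

The key point is in the loop body: every rollout is an expensive end-to-end run of the system, yet all of its detail is collapsed into one number before the optimizer sees it.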
To learn effectively from these sparse numerical scores, RL methods often require tens of thousands, or even hundreds of thousands, of trial runs, known as “rollouts”. For any real-world application involving expensive tool calls or powerful proprietary models, this process is prohibitively slow and costly. As one of the paper’s co-authors, Lakshya A Agrawal, noted, this complexity makes RL impractical for many teams, who often resort to manual prompt engineering instead. GEPA was designed to address this challenge, particularly for teams that need to optimize systems built on top-tier models that cannot be easily fine-tuned.
How GEPA uses language to learn and evolve
The GEPA (Genetic-Pareto) framework tackles the inefficiency of RL by replacing sparse numerical rewards with rich, natural language feedback. It leverages the fact that the entire execution of an AI system, including its reasoning steps, tool calls, and error messages, can be turned into text that an AI model can read and analyze. The methodology is built on three core pillars; a short sketch after the list below shows how they fit together.
- Genetic prompt evolution: GEPA treats a collection of prompts like a gene pool. It iteratively “mutates” these prompts to create new, potentially better versions for the AI system to use.
- Reflection with natural language feedback: This is the key innovation. After a few trial runs, GEPA provides an AI model with the full text of what the system tried to do and what went wrong. The model then “reflects” on this feedback to diagnose the problem in plain language and write an improved prompt. For example, instead of just seeing a low score, it might analyze a compiler error and conclude the prompt needs to specify a particular library version.
- Pareto-based selection: To avoid getting stuck on a single, suboptimal solution, GEPA maintains a diverse roster of high-performing “specialist” prompts. By tracking which prompts work best on different examples, it explores a wider range of strategies and is more likely to find a solution that works well across many inputs.
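The sketch below illustrates how the three pillars might combine. It is an illustration of the idea, not the authors’ implementation: run_and_trace, reflect_and_rewrite, and the toy scoring are hypothetical stand-ins for executing the compound system, calling an LLM to reflect on the textual trace, and the task metric.

```python
import random

def run_and_trace(prompt: str, example: dict) -> tuple[float, str]:
    # Placeholder: execute the compound system once and return
    # (score, full execution trace as text: reasoning steps, tool calls, errors).
    return 0.5, f"trace for {example['question']!r} under prompt {prompt!r}"

def reflect_and_rewrite(prompt: str, trace: str) -> str:
    # Placeholder for an LLM call that reads the trace, diagnoses what went
    # wrong in plain language, and writes an improved prompt.
    return prompt + " [revised after reflecting on: " + trace[:40] + "...]"

def pareto_front(scores: dict[str, list[float]]) -> list[str]:
    # Pareto-based selection: keep every prompt that is best (or tied for best)
    # on at least one example, preserving a roster of "specialist" prompts.
    prompts = list(scores)
    num_examples = len(next(iter(scores.values())))
    front = set()
    for i in range(num_examples):
        best = max(scores[p][i] for p in prompts)
        front.update(p for p in prompts if scores[p][i] == best)
    return list(front)

def gepa_optimize(seed_prompt: str, examples: list[dict], budget: int = 150) -> list[str]:
    # Start the "gene pool" with the seed prompt and its per-example scores.
    scores = {seed_prompt: [run_and_trace(seed_prompt, ex)[0] for ex in examples]}
    for _ in range(budget):                               # a small rollout budget
        parent = random.choice(pareto_front(scores))      # pick a parent from the Pareto front
        example = random.choice(examples)
        _, trace = run_and_trace(parent, example)         # rich natural-language feedback
        child = reflect_and_rewrite(parent, trace)        # "mutation" via reflection
        scores[child] = [run_and_trace(child, ex)[0] for ex in examples]
    return pareto_front(scores)
```

In this toy version, the per-example score table is what keeps the Pareto-style pool cheap to maintain: each new candidate is evaluated once, then compared example by example against the rest of the pool.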
The researchers evaluated GEPA across four diverse tasks and found that it substantially outperformed the RL-based method GRPO. In testing, GEPA achieved up to a 19% higher score while using up to 35 times fewer rollouts. In one concrete example, GEPA optimized a question-answering system in approximately 3 hours at a cost of less than $20, whereas the RL-based approach took 24 hours and cost about $300, representing an 8x reduction in time and a 15x reduction in cost for better results.
Beyond raw performance, GEPA-optimized systems were found to be more reliable on new, unseen data, which the researchers attribute to the richer, language-based feedback. The prompts produced by GEPA were also up to 9.2 times shorter than those from other optimizers, which reduces latency and cost in production. The researchers also noted that GEPA can be used as an “inference-time” problem solver, automatically generating and refining solutions in a continuous integration pipeline. In one experiment, this approach reached expert-level performance on 20% of code generation tasks, a level that a standard single-shot attempt from GPT-4o achieved on none.
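As a rough illustration of that inference-time use, the loop below drafts a candidate solution, runs the project’s tests, and feeds the raw failure output back as natural-language feedback for the next attempt. The generate_code stub and the pytest invocation are assumptions made for this example, not details from the paper.

```python
import subprocess
from pathlib import Path
from typing import Optional

def generate_code(spec: str, feedback: str = "") -> str:
    # Placeholder for an LLM call that drafts (or revises) a solution
    # from the task spec plus the previous attempt's failure text.
    return "def solve():\n    return None\n"

def run_tests(code: str) -> tuple[bool, str]:
    # Write the candidate and run the suite; the raw output is the feedback.
    Path("candidate.py").write_text(code)
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def refine_until_green(spec: str, max_attempts: int = 5) -> Optional[str]:
    feedback = ""
    for _ in range(max_attempts):
        code = generate_code(spec, feedback)
        passed, log = run_tests(code)
        if passed:
            return code
        feedback = log      # failure text, not a bare score, drives the next attempt
    return None
```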