This is a landmark achievement. Computer scientists from Tsinghua University have announced the most significant advance in shortest-path algorithms in over 40 years. The team has developed a new algorithm that overcomes the long-standing “sorting barrier” of Dijkstra’s renowned 1959 algorithm, a cornerstone of computing that has powered everything from GPS navigation to the internet’s data routing. The development is poised to have a transformative impact on many industries, particularly supply chain management and manufacturing.
Think of a huge map of one-way roads (a directed graph). You pick a starting city and want to know how far every other city is from you along the cheapest route. This “single-source shortest paths” task powers things like navigation, logistics, network routing, and game AI.
For decades, the go-to solution has been Dijkstra’s algorithm. It’s wonderfully reliable, but it spends a lot of effort lining up all cities by increasing distance as it goes. That “full line-up” (sorting) becomes a time sink on giant, sparse networks.
The new research shows you can get all the distances without fully lining up the cities. You only need enough order to keep moving forward—no more.
Explaining it all
Now let’s explain it from a technical point of view:
The single-source shortest path (SSSP) problem involves a directed graph G = (V, E) with n vertices and m edges, along with a non-negative real weight for each edge (Duan et al. 2025, p. 1). The goal is to find the length of the shortest path from a source vertex s to every other vertex v in the graph (Duan et al. 2025, p. 3). The standard algorithm for this task is Dijkstra’s algorithm, which, when implemented with advanced data structures like Fibonacci or relaxed heaps, has a time complexity of O(m + n log n) (Duan et al. 2025, p. 1). This algorithm operates in the comparison-addition model, where only comparisons and additions of edge weights are permitted, with each operation taking unit time (Duan et al. 2025, p. 1).
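For reference, here is a minimal sketch of Dijkstra’s algorithm in Python. It uses the standard-library heapq module, i.e. a binary heap rather than the Fibonacci or relaxed heaps mentioned above, so this particular sketch runs in O((m + n) log n); the graph representation and names are illustrative, not taken from the paper.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths with non-negative edge weights.

    graph: dict mapping each vertex to a list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from `source`. Uses a binary heap
    (heapq), so this sketch runs in O((m + n) log n); the O(m + n log n)
    bound quoted above requires Fibonacci or relaxed heaps.
    """
    dist = {source: 0}
    heap = [(0, source)]                       # priority queue of (tentative distance, vertex)
    while heap:
        d, u = heapq.heappop(heap)             # extract the closest unsettled vertex
        if d > dist[u]:
            continue                           # stale queue entry; skip it
        for v, w in graph.get(u, []):          # relax every outgoing edge of u
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Tiny example: a directed graph with three vertices
g = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
print(dijkstra(g, "s"))  # {'s': 0, 'a': 2, 'b': 3}
```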
Dijkstra’s algorithm works by repeatedly extracting the vertex with the minimum known distance from a priority queue and relaxing its outgoing edges (Duan et al. 2025, p. 2). A byproduct of this process is that it sorts vertices by their distance from the source (Duan et al. 2025, p. 1). This sorting aspect creates a time bottleneck, often called the “sorting barrier,” which has been believed to be a fundamental limit for SSSP algorithms, suggesting a lower bound of Ω(n log n) (Duan et al. 2025, p. 2). Recent work confirmed that if an algorithm must output the sorted order of vertices by distance, Dijkstra’s algorithm is optimal (Duan et al. 2025, p. 1). However, the question remained whether this barrier could be broken for directed graphs if only the distances themselves were required (Duan et al. 2025, p. 1). The authors of the new study present a deterministic algorithm that achieves this, running in O(m log²/³ n) time (Duan et al. 2025, p. 1). This result is the first to break the O(m + n log n) bound on sparse graphs for this problem and is also the first deterministic algorithm to do so even for undirected graphs (Duan et al. 2025, p. 2).
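To get a rough feel for the gap between the two bounds, the following back-of-envelope comparison (purely illustrative; it ignores constant factors and assumes base-2 logarithms and a sparse graph with m ≈ 2n) evaluates the dominant terms of each bound:

```python
import math

# Illustrative comparison of the dominant terms in the two bounds for a
# sparse directed graph with m ≈ 2n (intuition only; constants are ignored).
for n in (10**6, 10**8):
    m = 2 * n
    log_n = math.log2(n)
    dijkstra_term = m + n * log_n          # O(m + n log n)
    new_term = m * log_n ** (2 / 3)        # O(m * log^(2/3) n)
    print(f"n={n:>11,}  Dijkstra ~{dijkstra_term:.2e}  new ~{new_term:.2e}")
```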
The new algorithm combines principles from both Dijkstra’s algorithm and the Bellman-Ford algorithm (Duan et al. 2025, p. 2). The Bellman-Ford algorithm relaxes all edges multiple times and can find shortest paths with at most k edges in O(mk) time without needing to sort vertices (Duan et al. 2025, p. 2). The researchers’ core idea is to merge these two approaches using a recursive, divide-and-conquer technique focused on reducing the size of the “frontier,” which is the set of vertices being actively considered by the algorithm (Duan et al. 2025, p. 2).
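A minimal sketch of the Bellman-Ford ingredient: k rounds of relaxing every edge settle all vertices whose shortest paths use at most k edges, with no priority queue and no sorting. The representation and names below are illustrative, not the paper’s.

```python
def bellman_ford_k(edges, n, source, k):
    """Relax every edge k times (no sorting, no priority queue).

    edges: list of (u, v, w) triples for a directed graph on vertices 0..n-1.
    After k rounds, dist[v] is correct for every v whose shortest path from
    `source` uses at most k edges; the total work is O(m * k).
    """
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(k):                  # k rounds of full relaxation
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:   # relax edge (u, v)
                dist[v] = dist[u] + w
                updated = True
        if not updated:                 # nothing changed: all distances are final
            break
    return dist

# Example: with k = 2, only paths of at most 2 edges are guaranteed correct.
edges = [(2, 3, 7), (1, 2, 1), (0, 1, 4)]
print(bellman_ford_k(edges, 4, 0, 2))   # [0, 4, 5, inf]  (vertex 3 needs a 3-edge path)
```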
Algorithmic framework and assumptions
The algorithm operates under the comparison-addition model (Duan et al. 2025, p. 3). For the analysis, it is assumed that the graph has a constant in-degree and out-degree (Duan et al. 2025, p. 3). The authors note that any general graph can be converted to this form through a standard transformation that preserves shortest paths while creating a new graph with O(m) vertices and edges (Duan et al. 2025, p. 3). The algorithm also maintains a distance estimate d̂[u] for each vertex u, which is always greater than or equal to the true shortest distance d(u) (Duan et al. 2025, p. 3). A vertex u is considered “complete” if d̂[u] = d(u) (Duan et al. 2025, p. 3). For simplicity of presentation, the paper assumes all paths have unique lengths, but it provides a method to enforce a total order on paths using lexicographical comparison of tuples of path length, vertex count, and the reverse sequence of vertices, so this assumption does not limit the algorithm’s generality (Duan et al. 2025, p. 4).
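One common version of the degree-reducing transformation mentioned above splits every vertex into a zero-weight directed cycle of copies, one copy per incident edge. The sketch below illustrates that standard construction; it is not code from the paper, and the details may differ from the variant the authors cite.

```python
from collections import defaultdict

def make_constant_degree(edges):
    """Split every vertex into a zero-weight directed cycle of copies.

    edges: list of (u, v, w) triples. Returns a new edge list in which every
    vertex has in-degree and out-degree at most 2, plus a map from each
    original vertex to one representative copy. The new graph has O(m)
    vertices and edges, and shortest-path distances between representatives
    match those of the original graph. (Illustrative; the paper simply cites
    a standard transformation with these properties.)
    """
    slots = defaultdict(list)                   # vertex -> its incident edge "slots"
    for i, (u, v, w) in enumerate(edges):
        slots[u].append(("out", i))
        slots[v].append(("in", i))

    copy_of = {}                                # (vertex, slot) -> name of the copy handling it
    new_edges, rep = [], {}
    for x, xslots in slots.items():
        names = [f"{x}#{j}" for j in range(len(xslots))]
        rep[x] = names[0]
        if len(names) > 1:                      # zero-weight cycle through the copies of x
            for j, name in enumerate(names):
                new_edges.append((name, names[(j + 1) % len(names)], 0))
        for slot, name in zip(xslots, names):
            copy_of[(x, slot)] = name

    for i, (u, v, w) in enumerate(edges):       # re-attach each original edge to one copy per endpoint
        new_edges.append((copy_of[(u, ("out", i))], copy_of[(v, ("in", i))], w))
    return new_edges, rep
```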
The Bounded Multi-Source Shortest Path (BMSSP) procedure
The main component of the algorithm is a recursive procedure named Bounded Multi-Source Shortest Path, or BMSSP (Duan et al. 2025, p. 4). This procedure is designed to find all true distances to vertices that are less than a given upper bound B and whose shortest paths depend on a set of “frontier” vertices S (Duan et al. 2025, p. 4). The algorithm is structured as a divide-and-conquer process with approximately (log n)/t recursion levels, where t := ⌊log²/³(n)⌋ is a parameter (Duan et al. 2025, p. 4). A call to BMSSP(l, B, S) takes a level l, the bound B, and the frontier set S as input (Duan et al. 2025, p. 4). It returns a new, possibly smaller, boundary B′ ≤ B and a set of vertices U that are now complete (Duan et al. 2025, p. 5). The execution can be either successful, where B′ = B, or partial, where B′ < B because the computational workload reached a predefined threshold (Duan et al. 2025, p. 5). The main algorithm starts with a single call to BMSSP with the source vertex {s} as the initial frontier and an infinite bound B (Duan et al. 2025, p. 5).
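To make these parameters concrete, the snippet below (illustrative only; it assumes base-2 logarithms, which the description leaves implicit) evaluates t and the resulting recursion depth for a few graph sizes:

```python
import math

# Illustrative values of t = floor(log^(2/3) n) and the recursion depth ~ (log n)/t,
# assuming base-2 logarithms (constants and rounding details aside).
for n in (10**4, 10**6, 10**9):
    log_n = math.log2(n)
    t = math.floor(log_n ** (2 / 3))
    print(f"n = {n:>13,}   t = {t}   levels ~ {log_n / t:.1f}")
```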
Key sub-routines
The BMSSP procedure relies on two key components to achieve its speedup (Duan et al. 2025, p. 5). The first is a sub-routine called FindPivots (Duan et al. 2025, p. 5). This function takes the frontier set S and performs a Bellman-Ford-like relaxation for k steps, where k := ⌊log¹/³(n)⌋ (Duan et al. 2025, p. 4). This process either computes the final shortest paths for vertices whose paths from S are short (fewer than k vertices) or identifies a smaller set of “pivots” P ⊆ S (Duan et al. 2025, p. 5). These pivots are vertices in S that are roots of larger shortest-path subtrees (containing at least k vertices), and the number of such pivots is bounded by |Ue|/k, where Ue is the set of vertices of interest (Duan et al. 2025, p. 6). This effectively reduces the size of the frontier that needs to be processed in subsequent recursive calls (Duan et al. 2025, p. 5).
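The following is a much-simplified sketch of the FindPivots idea as described above, not the authors’ exact routine: it runs k rounds of Bellman-Ford-style relaxation out of the frontier, remembers which frontier vertex each improved path descends from, and keeps as pivots only those frontier vertices whose relaxation trees reach at least k vertices.

```python
def find_pivots_sketch(graph, frontier, d_hat, k, bound):
    """Simplified sketch of the FindPivots idea (not the paper's exact routine).

    graph: dict vertex -> list of (neighbor, weight) pairs.
    frontier: set of frontier vertices S, each with a current estimate d_hat[u].
    Runs k rounds of Bellman-Ford-style relaxation from the frontier, recording
    which frontier vertex each improved path descends from; frontier vertices
    whose relaxation trees reach at least k vertices are returned as pivots.
    """
    root = {u: u for u in frontier}        # frontier vertex each improved path passes through
    reached = set(frontier)
    current = set(frontier)
    for _ in range(k):                     # k rounds of limited relaxation
        nxt = set()
        for u in current:
            for v, w in graph.get(u, []):
                nd = d_hat[u] + w
                if nd < d_hat.get(v, float("inf")) and nd < bound:
                    d_hat[v] = nd          # improve the distance estimate
                    root[v] = root[u]      # v currently descends from root[u]
                    nxt.add(v)
        reached |= nxt
        current = nxt

    tree_size = {u: 0 for u in frontier}   # size of each frontier vertex's relaxation tree
    for v in reached:
        tree_size[root[v]] += 1
    return {u for u in frontier if tree_size[u] >= k}
```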
The second component is a specialized data structure used to manage the pivots and other vertices without fully sorting them (Duan et al. 2025, p. 6). This structure supports three main operations (Duan et al. 2025, pp. 6-7):
- Insert: adds a key-value pair in amortized O(max{1, log(N/M)}) time, where N is the number of items and M is a block-size parameter (Duan et al. 2025, p. 6).
- BatchPrepend: inserts a batch of new items that are all smaller than any existing item, which is efficient for handling results from recursive calls (Duan et al. 2025, p. 7).
- Pull: extracts a block of the M smallest items from the structure without sorting the entire collection, which provides the input for the next recursive call of BMSSP (Duan et al. 2025, p. 7).
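As a toy illustration of these three operations (of their semantics only; the amortized bounds quoted above belong to the authors’ structure, described next, not to this sketch), one can keep items in small unsorted blocks that are ordered only across blocks:

```python
class PartialSortQueue:
    """Toy sketch of a partial-sorting structure (illustrative only).

    Items live in blocks of at most M (key, value) pairs. Items are unsorted
    inside a block, but every item in an earlier block is <= every item in a
    later block, so Pull only ever looks at a few leading blocks. The authors'
    structure realizes this idea with a block-based linked list plus a balanced
    search tree over block boundaries and attains the amortized bounds quoted
    above; this sketch makes no such efficiency claims.
    """

    def __init__(self, block_size):
        self.M = block_size
        self.blocks = []                           # list of [upper_bound, [(key, value), ...]]

    def insert(self, key, value):
        """Add one item to the first block whose upper bound covers its key."""
        for i, block in enumerate(self.blocks):
            if key <= block[0]:
                block[1].append((key, value))
                if len(block[1]) > self.M:
                    self._split(i)
                return
        if self.blocks:                            # key exceeds every bound: extend the last block
            self.blocks[-1][0] = key
            self.blocks[-1][1].append((key, value))
            if len(self.blocks[-1][1]) > self.M:
                self._split(len(self.blocks) - 1)
        else:
            self.blocks.append([key, [(key, value)]])

    def _split(self, i):
        """Split an overfull block around its median key (only this block is sorted)."""
        bound, items = self.blocks[i]
        items.sort(key=lambda kv: kv[0])
        mid = len(items) // 2
        lower, upper = items[:mid], items[mid:]
        self.blocks[i:i + 1] = [[lower[-1][0], lower], [bound, upper]]

    def batch_prepend(self, items):
        """Add items known to be smaller than everything currently stored."""
        items = sorted(items, key=lambda kv: kv[0])          # the real structure avoids this sort
        chunks = [items[j:j + self.M] for j in range(0, len(items), self.M)]
        self.blocks[0:0] = [[chunk[-1][0], chunk] for chunk in chunks]

    def pull(self):
        """Remove and return up to M smallest items, touching only leading blocks."""
        taken = []
        while self.blocks and len(taken) < self.M:
            taken.extend(self.blocks.pop(0)[1])
        taken.sort(key=lambda kv: kv[0])
        result, leftover = taken[:self.M], taken[self.M:]
        if leftover:                                         # put the surplus back at the front
            self.blocks.insert(0, [leftover[-1][0], leftover])
        return result

# Tiny usage example
q = PartialSortQueue(block_size=3)
for key, val in [(7, "g"), (3, "c"), (9, "i"), (5, "e")]:
    q.insert(key, val)
q.batch_prepend([(1, "a"), (2, "b")])
print(q.pull())   # [(1, 'a'), (2, 'b'), (3, 'c')]
```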
This data structure is implemented using a block-based linked list combined with a self-balancing binary search tree to manage block boundaries (Duan et al. 2025, p. 7). By combining these techniques, the algorithm avoids the need to maintain a fully sorted priority queue of all n vertices (Duan et al. 2025, p. 2). Instead, it recursively partitions the problem, uses limited Bellman-Ford relaxations to find pivots, and uses a partial sorting data structure to efficiently manage the much smaller set of pivots for the next stage of the recursion (Duan et al. 2025, p. 5). The total time complexity is derived from the work done at each of the O((log n)/t) levels of recursion, which leads to the final O(m log²/³ n) bound (Duan et al. 2025, p. 14).
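As a sanity check on the exponent arithmetic (a back-of-envelope average, not the paper’s level-by-level accounting): with t = ⌊log²/³ n⌋ there are about log¹/³ n recursion levels, so the stated total corresponds to an average of roughly O(m log¹/³ n) work per level.

```latex
% Back-of-envelope check (average per-level work, not the paper's accounting):
\frac{\log n}{t} \;\approx\; \frac{\log n}{\log^{2/3} n} \;=\; \log^{1/3} n
\quad\text{levels},
\qquad
\log^{1/3} n \cdot O\!\left(m \log^{1/3} n\right) \;=\; O\!\left(m \log^{2/3} n\right).
```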