I deeply thank my friend Lucian for sharing his vast knowledge of algorithms with me over the years, including the algorithm described here that I’m surprised I never heard of before.

## Introduction

The shortest path problem and its derivatives are an important part of the computer science curriculum.

Freshmen are usually puzzled by the variety of algorithms they are taught, each with its own quirks: Dijkstra’s algorithm is the favorite, Bellman–Ford can detect negative cycles, and Floyd–Warshall is the best for specific use cases like computing all-pairs shortest paths in a dense graph. Sometimes they even learn about Johnson’s algorithm, a clever combination of Dijkstra’s and Bellman–Ford.

Then they discover unsettling truths, for example that the asymptotic complexity of Dijkstra’s algorithm can be improved by using Fibonacci heaps, but that this does not improve the actual runtime for reasonable input sizes.

Finally, they sometimes learn that the general shortest *simple* path problem is actually NP-hard (yes, it is when there are negative cycles), as is the longest simple path problem (the two are equivalent).

Yet there are some variations that *look* hard but are not if we make use of some clever tricks. One of these is matrix multiplication over the tropical semiring.

## Weight constrained shortest path

The Weight Constrained Shortest Path Problem (WCSPP, also called RCSPP for Resource Constrained Shortest Path Problem) associates two numbers with each edge of the graph (a weight and a cost, although their roles can be swapped) and asks for a path that minimizes the sum of one number while keeping the sum of the other below a given threshold. Since the longest path problem easily reduces to it, it is also NP-hard in general. If there are no negative cycles, it is only weakly NP-hard: you can solve it with a pseudo-polynomial dynamic programming approach.
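To make the weakly NP-hard case concrete, here is a minimal sketch of that dynamic program, assuming positive integer costs and no negative cycles (the function name and the edge encoding are made up for this illustration):

```python
INF = float("inf")

def wcspp(n, edges, source, target, budget):
    """Minimum total weight of a source-to-target path whose total
    cost stays within `budget`.  Assumes positive integer costs and
    no negative cycles; edges are (u, v, weight, cost) tuples."""
    # dist[c][v]: minimum weight over paths reaching v with total cost <= c
    dist = [[INF] * n for _ in range(budget + 1)]
    dist[0][source] = 0
    for c in range(1, budget + 1):
        for v in range(n):
            dist[c][v] = dist[c - 1][v]  # a smaller budget always remains valid
        for u, v, wt, ct in edges:
            # costs are >= 1, so dist[c - ct] was finalized in an earlier round
            if ct <= c and dist[c - ct][u] + wt < dist[c][v]:
                dist[c][v] = dist[c - ct][u] + wt
    return dist[budget][target]

edges = [(0, 1, 1, 3), (1, 2, 1, 3), (0, 2, 10, 1)]
print(wcspp(3, edges, 0, 2, 6))  # 2: path 0-1-2, cost 6 fits the budget
print(wcspp(3, edges, 0, 2, 5))  # 10: only the direct edge fits
```

The table has $O(nC)$ entries for budget $C$, which is polynomial in the *value* of the budget but not in its bit length, hence "weakly" NP-hard.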

However, when either the weights or the costs are unit, the problem is actually polynomial!

As a convention, let’s define the *length* $L$ of a path as the number of edges it contains (thus the sum of unit numbers), and the *weight* $W$ of a path as the sum of the weights of its edges.

I claim that we can solve all these problems:

- $\min W$ with $L = k$
- $\min W$ with $L \leq k$
- $\min L$ with $W \ge \lambda$

You can easily flip the sign on $W$ to get the maximum instead of the minimum and reverse the inequalities on $W$. You can also maximize $L$ in the last case by reversing the condition on $W$.

## Path of length $k$

The first problem is the easiest and the best illustration of the trick.

Assume that we have solved the problem for all paths of length $k-1$, and that $w_{ab}$ is the minimum weight of such a path from $a$ to $b$. Then, to grow the paths by one edge, $w$ should be updated as $w'_{ab} = \min_c (w_{ac} + w_{cb})$. This should remind you of matrix multiplication!

The only difference is that we replaced multiplication by addition and addition by minimum. It turns out that this produces a semiring called the tropical semiring, and that matrix multiplication is actually well-defined over this semiring.

```python
def mult(a, b):
    "tropical matrix multiplication"
    n = len(a)
    return [
        [min(a[i][j] + b[j][k] for j in range(n)) for k in range(n)]
        for i in range(n)
    ]
```

We can also define exponentiation:

```python
def pow(a, k):
    "tropical matrix exponentiation"
    n = len(a)
    res = [[0 if i == j else float("inf") for j in range(n)] for i in range(n)]
    while k:
        if k & 1:
            res = mult(res, a)
        a = mult(a, a)
        k >>= 1
    return res
```

It is important to notice that we initialized the result to the identity matrix of the tropical semiring, which has $0$ on the diagonal and $\infty$ everywhere else. This is because $0$ is the neutral element for addition (the tropical product), and $\infty$ is the neutral element for minimum (the tropical sum).

If $w$ is the matrix of edge weights, then $w^k$ is the matrix of minimum weights of paths of length exactly $k$. The solution to the problem is then $w^k_{st}$, where $s$ and $t$ are the source and the target of the path. One can also easily take the minimum over all pairs of endpoints.
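As a quick sanity check, here is a self-contained toy example (the graph is made up, and `mult` is repeated so the snippet runs on its own):

```python
INF = float("inf")

def mult(a, b):
    "tropical (min-plus) matrix multiplication"
    n = len(a)
    return [
        [min(a[i][j] + b[j][k] for j in range(n)) for k in range(n)]
        for i in range(n)
    ]

# adjacency matrix of a small DAG: 0 -> 1 -> 2 -> 3 with shortcuts
w = [
    [INF, 1, 4, INF],
    [INF, INF, 2, 6],
    [INF, INF, INF, 3],
    [INF, INF, INF, INF],
]

w2 = mult(w, w)
w3 = mult(w2, w)
print(w2[0][3])  # 7: cheapest 2-edge path (0-1-3 or 0-2-3)
print(w3[0][3])  # 6: cheapest 3-edge path, 0-1-2-3
```

Note that the cheapest path overall (length 3, weight 6) is invisible in $w^2$, which only sees paths of length exactly 2.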

We are also lucky that the same trick works with $\max$ instead of $\min$, so we can also solve the maximum-weight path of length $k$ problem with the same algorithm. Flipping the signs also works, as always.

Overall, the complexity of this algorithm is $O(n^3 \log k)$, where $n$ is the number of nodes and $k$ is the length of the path.

## Path of length at most $k$

We can trick our algorithm into computing the minimum-weight path of length at most $k$ by adding a self-loop of weight $0$ to each node: a path shorter than $k$ can then pad itself with zero-weight loops until it reaches length exactly $k$. The complexity is still $O(n^3 \log k)$.
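Here is a small illustration of the self-loop trick (the graph is made up, and `mult` is repeated so the snippet stands alone):

```python
INF = float("inf")

def mult(a, b):
    "tropical (min-plus) matrix multiplication"
    n = len(a)
    return [
        [min(a[i][j] + b[j][k] for j in range(n)) for k in range(n)]
        for i in range(n)
    ]

# direct edge 0 -> 3 of weight 5, plus costlier two-edge routes
w = [
    [INF, 1, 4, 5],
    [INF, INF, 2, 6],
    [INF, INF, INF, 3],
    [INF, INF, INF, INF],
]

exactly_two = mult(w, w)[0][3]  # paths of exactly 2 edges

# zero-weight self-loops let shorter paths survive in higher powers
for i in range(len(w)):
    w[i][i] = 0

at_most_two = mult(w, w)[0][3]  # now paths of at most 2 edges
print(exactly_two, at_most_two)  # 7 5: the direct edge wins once loops exist
```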

## Path of weight less than $\lambda$

This is the final problem. We could of course binary search on the output of the previous problem, which would cost $O(n^3 \log^2 k)$. There is a more elegant solution: precompute the powers of the matrix $w$ whose exponents are powers of two, for a total complexity of $O(n^3 \log k)$. (This doubling search assumes that $w^e_{st}$ is nondecreasing in $e$.) The code below computes the largest `exponent` $e$ such that $w^e_{st} < \lambda$; thus `exponent + 1` is the smallest $e$ such that $w^e_{st} \ge \lambda$.

```python
# powers[j] holds the matrix w^(2^j)
powers = [w]
while powers[-1][start][target] < _lambda:
    powers.append(mult(powers[-1], powers[-1]))
# powers[-1] is the first power of two whose (start, target) entry
# reaches _lambda, so the answer lies in [2^(i-1), 2^i)
i = len(powers) - 1
exponent = 2 ** (i - 1)
matrix = powers[i - 1]
for j in range(i - 2, -1, -1):
    tmp = mult(matrix, powers[j])
    if tmp[start][target] < _lambda:
        exponent += 2 ** j
        matrix = tmp
```
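Wrapped as a function, the search can be tried on a toy graph where the cheapest path of length $e$ from node $0$ to node $1$ weighs exactly $e + 1$ (the function name and the graph are made up for this illustration, and `mult` is repeated so the snippet runs standalone):

```python
INF = float("inf")

def mult(a, b):
    "tropical (min-plus) matrix multiplication"
    n = len(a)
    return [
        [min(a[i][j] + b[j][k] for j in range(n)) for k in range(n)]
        for i in range(n)
    ]

def largest_length_below(w, start, target, _lambda):
    """Largest e such that the cheapest path of length e from start to
    target weighs less than _lambda.  Assumes w[start][target] < _lambda
    and that the minimum weight is nondecreasing in the path length."""
    powers = [w]  # powers[j] holds w^(2^j)
    while powers[-1][start][target] < _lambda:
        powers.append(mult(powers[-1], powers[-1]))
    i = len(powers) - 1  # first power of two whose weight reaches _lambda
    exponent = 2 ** (i - 1)
    matrix = powers[i - 1]
    for j in range(i - 2, -1, -1):
        tmp = mult(matrix, powers[j])
        if tmp[start][target] < _lambda:
            exponent += 2 ** j
            matrix = tmp
    return exponent

# weight-1 self-loop on node 0, weight-2 edge 0 -> 1:
# the cheapest path of length e loops e-1 times and weighs e + 1
w = [[1, 2], [INF, INF]]
print(largest_length_below(w, 0, 1, 10))  # 8, since 8 + 1 < 10 <= 9 + 1
```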