1. Overview

Dijkstra’s algorithm is one of the most famous algorithms for finding shortest paths. One of its main components is the priority queue used to traverse the nodes of a graph, and this is also the main difference between Dijkstra’s algorithm and BFS.

In this tutorial, we’ll consider three versions of this algorithm. One will use a simple priority queue that supports only enqueuing and dequeuing a minimal node. Another will check for an existing node in the queue before enqueuing a new one, and the last will support decreasing the value of a key inside a heap.

2. Dijkstra’s Algorithm

First, let’s refresh our memory of the idea behind Dijkstra’s algorithm. In its general form, Dijkstra’s algorithm returns the collection of the shortest paths from a source node to all the other nodes:

algorithm DijkstrasAlgorithm(G, s):
    // INPUT
    //    G = the input graph (V, E)
    //    s = the source node
    // OUTPUT
    //    d = the shortest paths' length to all the nodes
    //    p = map of the previous nodes in the path

    for v in V[G]:
        d[v] <- +infinity
        p[v] <- undefined

    d[s] <- 0
    S <- an empty set  // visited set
    Q <- V[G]

    while Q is not empty:
        u <- vertex in Q with min d[u]
        Q <- Q - {u}

        if u not in S:
            S <- S + {u}

            for each edge (u, v) outgoing from u:
                if d[u] + w(u, v) < d[v]:
                    d[v] <- d[u] + w(u, v)
                    p[v] <- u
    
    return (d, p)
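
To make this more concrete, here’s a minimal Python sketch of the version above. It assumes the graph is given as a dictionary mapping each node to a list of (neighbor, weight) pairs, and it finds the minimum with a linear scan over the unvisited nodes, just like the pseudocode’s "vertex in Q with min d[u]". The function name and representation are ours, not a canonical implementation:

from math import inf

# A sketch of the general version: the graph is assumed to be a dict of
# adjacency lists, e.g. {'A': [('B', 1), ('C', 4)], ...}.
def dijkstra_all_targets(graph, s):
    d = {v: inf for v in graph}    # shortest known distance to every node
    p = {v: None for v in graph}   # previous node on the shortest path
    d[s] = 0
    unvisited = set(graph)         # plays the role of Q

    while unvisited:
        # extract-min by a linear scan, as in the pseudocode
        u = min(unvisited, key=lambda v: d[v])
        unvisited.remove(u)

        for v, w in graph[u]:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
                p[v] = u

    return d, p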

However, we often need only the shortest path from a source node to a single destination node. In this case, we can consider the following version, which also allows us to exit the algorithm early, as soon as the destination node is extracted from the queue:

algorithm Dijkstra(G, s, f):
    // INPUT
    //    G = the input graph (V, E)
    //    s = the source node
    //    f = the destination node
    // OUTPUT
    //    sp = the shortest path (ordered set of nodes)
    //    d[f] = the weight of the shortest path

    for v in V[G]:
        d[v] <- +infinity
        p[v] <- undefined

    d[s] <- 0
    S <- empty set // visited set
    Q <- V[G]

    while Q is not empty:
        u <- vertex in Q with min d[u]
        Q <- Q - {u}

        if u = f:
            break

        if u not in S:
            S <- S + {u}

            for each edge (u, v) outgoing from u:
                if d[u] + w(u, v) < d[v]:
                    d[v] <- d[u] + w(u, v)
                    p[v] <- u

    if d[f] != +infinity:
        path <- CreatePath(d, p, f)
        return (d[f], path)
    else:
        return (+infinity, empty set)

The CreatePath function recreates the path based on the information about the weights and the previous nodes. For simplicity, it’s not presented in the current algorithm, and our intro to Dijkstra explains it. All the examples in this article are built upon this version of Dijkstra’s algorithm, as finding the path between two nodes is the most common application.
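
For illustration, here’s one possible shape of such a reconstruction step in Python. It isn’t the exact CreatePath helper, just a sketch that walks backwards from the destination through the map of previous nodes, assumed to be a dict mapping each node to its predecessor, with None for the source:

# A hypothetical reconstruction helper: p is assumed to map each node to its
# predecessor on the shortest path, with None marking the source node.
def create_path(p, f):
    path = []
    node = f
    while node is not None:
        path.append(node)
        node = p[node]
    path.reverse()                 # we collected the nodes from f back to s
    return path

For instance, if p is {'A': None, 'B': 'A', 'D': 'B'}, then create_path(p, 'D') returns ['A', 'B', 'D'].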

3. Naive Implementation

This implementation of the classic Dijkstra algorithm has a significant flaw. However, it’s easy to make this mistake, especially when implementing the algorithm from scratch:

algorithm DijkstrasNaive(G, s, f):
    // INPUT
    //    G = (V, E) - the input graph
    //    s = source node
    //    f = destination node
    // OUTPUT
    //    sp = the shortest path (ordered set of nodes)
    //    d[f] = the weight of the shortest path

    for v in V[G]:
        d[v] <- infinity
        p[v] <- undefined

    d[s] <- 0
    S <- an empty set // visited set
    Q <- {(s, 0)}

    while Q is not empty:
        (u, wu) <- vertex in Q with minimum weight wu
        Q <- Q - {(u, wu)}

        if u = f:
            break

        if u not in S:
            S <- S + {u}

            for each edge (u, v) outgoing from u:
                if d[u] + w(u, v) < d[v]:
                    wv <- d[u] + w(u, v)
                    Q <- Q + {(v, wv)}
                    d[v] <- wv
                    p[v] <- u

    if d[f] != infinity:
        sp <- an empty ordered set
        n <- f

        while n is not undefined:
            sp <- {n} + sp
            n <- p[n]

        return (sp, d[f])

    return (empty set, infinity)

Although this algorithm returns the shortest path from the source node to the destination node, it does so suboptimally. In this approach, we constantly insert new nodes into the queue instead of updating the weights of the nodes already there. As a result, the number of elements in the queue might, in the worst case, grow to the number of edges in the graph. The increased queue size worsens the time complexity of enqueuing and dequeuing.

For this case, the time complexity is O(\log(E)) for both the enqueue and dequeue operations, so the total time complexity of this approach is \mathbf{O(E*\log(E))}. In most cases, we see this implementation with priority queues that store duplicate nodes and don’t allow easy access to the elements inside by their key.
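
In Python, this naive variant corresponds to the common pattern of pushing a fresh entry into a binary heap on every relaxation. The sketch below assumes the same adjacency-list dictionary as before, and the function name is our own:

import heapq
from math import inf

# A sketch of the naive variant: duplicates are allowed in the heap, so it can
# grow up to E entries and every push/pop costs O(log E).
def dijkstra_naive(graph, s, f):
    d = {v: inf for v in graph}
    p = {v: None for v in graph}
    d[s] = 0
    visited = set()
    queue = [(0, s)]               # (weight, node) pairs, possibly stale

    while queue:
        wu, u = heapq.heappop(queue)
        if u == f:
            break
        if u in visited:
            continue               # a stale duplicate, skip it
        visited.add(u)

        for v, w in graph[u]:
            if wu + w < d[v]:
                d[v] = wu + w
                p[v] = u
                heapq.heappush(queue, (d[v], v))   # the old entry stays behind

    return d[f], p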

4. Improving the Time Complexity

To deal with the problem explained in the previous section, we need to ensure that the queue doesn’t contain the same node with different weights. There are a couple of ways to achieve this.

4.1. Eager Removal

Eagerly removing and re-inserting nodes helps us mitigate the issue created by the previous implementation, as it never allows the same node to appear in the queue twice. Instead of cleaning up the queue lazily, we remove the outdated entry for a node when we insert its updated one:

algorithm DijkstraEagerRemoval(G, s, f):
    // INPUT
    //    G = the input graph (V, E)
    //    s = the source node
    //    f = the destination node
    // OUTPUT
    //    sp = the shortest path (ordered set of nodes)
    //    d[f] = the weight of the shortest path

    for v in V[G]:
        d[v] <- +infinity
        p[v] <- undefined

    d[s] <- 0
    S <- an empty set // visited set
    Q <- an empty priority queue

    Q.enqueue(s, 0)

    while Q is not empty:
        (u, wu) <- Q.dequeueMin() // removes the vertex with the minimum weight

        if u = f:
            break

        if u not in S:
            S <- S + {u}

            for each edge (u, v) outgoing from u:
                if wu + w(u, v) < d[v]:
                    if v in Q:
                        Q.dequeue(v) // eagerly remove the outdated entry for v

                    Q.enqueue(v, wu + w(u, v))
                    d[v] <- wu + w(u, v)
                    p[v] <- u

The remaining part, which constructs and returns the shortest path, follows the same procedure as shown in the general implementation.

This approach can dramatically improve the complexity of the algorithm, making the time complexity of the enqueue and dequeue operations \mathbf{O(log(V))}. Unfortunately, this implementation requires a more complex queue: standard priority queue implementations have linear time complexity for locating an arbitrary element. Thus, to use this approach, we should ensure that element retrieval and removal don’t degrade the performance.
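
As a concrete illustration, here’s a Python sketch of the eager-removal variant. The standard library heapq doesn’t support removing an arbitrary entry, so this sketch assumes the third-party sortedcontainers package, whose SortedList keeps the (weight, node) pairs ordered and lets us remove the outdated entry in roughly logarithmic time. The function name and graph representation are again our own:

from math import inf
from sortedcontainers import SortedList   # assumed third-party dependency

# A sketch of the eager-removal variant: each node has at most one entry in the
# queue, because the outdated entry is removed before the updated one is added.
def dijkstra_eager_removal(graph, s, f):
    d = {v: inf for v in graph}
    p = {v: None for v in graph}
    d[s] = 0
    visited = set()
    queue = SortedList([(0, s)])           # ordered by (weight, node)

    while queue:
        wu, u = queue.pop(0)               # the entry with the minimum weight
        if u == f:
            break
        if u in visited:
            continue
        visited.add(u)

        for v, w in graph[u]:
            new_w = wu + w
            if new_w < d[v]:
                if d[v] != inf:            # v already has an entry in the queue
                    queue.remove((d[v], v))
                queue.add((new_w, v))
                d[v] = new_w
                p[v] = u

    return d[f], p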

4.2. Decrease-Key Implementation

Decreasing the key of a node already in the queue is more straightforward and clear. In other words, if we find a shorter path to a particular node, we update this information directly in our queue:

algorithm DijkstrasDecreaseKey(G, s, f):
    // INPUT
    //    G = the input graph
    //    s = the source node
    //    f = the destination node
    // OUTPUT
    //    sp = the shortest path (ordered set of nodes)
    //    d[f] = the weight of the shortest path

    for v in V[G]:
        d[v] <- +infinity
        p[v] <- undefined

    d[s] <- 0
    S <- an empty set // visited set
    Q <- an empty priority queue

    Q.enqueue(s, 0)

    while Q is not empty:
        (u, wu) <- Q.dequeueMin() // removes the vertex with the minimum weight

        if u = f:
            break

        if u not in S:
            S <- S + {u}

            for each edge (u, v) outgoing from u:
                if wu + w(u, v) < d[v]:
                    if v in Q:
                        Q.decreaseKey(v, wu + w(u, v))
                    else:
                        Q.enqueue(v, wu + w(u, v))

                    d[v] <- wu + w(u, v)
                    p[v] <- u

If our heap allows us to decrease a specific key, this approach, in combination with a binary heap, has the same time complexity as the previous one. Additionally, this implementation ensures that the queue size can never exceed the number of nodes.
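
For a runnable sketch of this variant, Python’s standard library again has no heap with a decrease-key operation, so the example below assumes the third-party heapdict package: it behaves like a dictionary whose popitem() returns the entry with the smallest value, and assigning a smaller value to a key that’s already present acts as a decrease-key. The function name and graph representation are our own:

from math import inf
from heapdict import heapdict          # assumed third-party dependency

# A sketch of the decrease-key variant: a node that is already queued gets its
# key lowered in place, so the queue never holds more than V entries.
def dijkstra_decrease_key(graph, s, f):
    d = {v: inf for v in graph}
    p = {v: None for v in graph}
    d[s] = 0
    visited = set()
    queue = heapdict()
    queue[s] = 0

    while queue:
        u, wu = queue.popitem()        # the node with the minimum weight
        if u == f:
            break
        if u in visited:
            continue
        visited.add(u)

        for v, w in graph[u]:
            new_w = wu + w
            if new_w < d[v]:
                queue[v] = new_w       # enqueue, or decrease the key if present
                d[v] = new_w
                p[v] = u

    return d[f], p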

4.3. Improved Decrease-Key Implementation

Technically, the decrease-key implementation with special heap structures like the Fibonacci heap can dramatically improve the algorithm’s performance. The algorithm itself won’t change and remains the same as in the previous section; only the underlying implementation of the heap differs. However, a Fibonacci heap has a greater constant factor than a binary heap and requires a more complex data structure. Additionally, the time complexity of this approach is amortized, which means that it’s averaged over a sequence of operations.

5. Complexity Analysis

We can now compare the complexities of these approaches in a table; the naive approach isn’t included. Here are the time complexities of the operations for each queue implementation:

Queue Type                 | Enqueue    | Dequeue    | Decrease-Key | TC with Decrease-Key | TC without Decrease-Key
Binary heap                | O(\log(V)) | O(\log(V)) | O(\log(V))   | O(E * \log(V))       | O(E * \log(V))
Binomial heap              | O(\log(V)) | O(\log(V)) | O(\log(V))   | O(E * \log(V))       | O(E * \log(V))
Fibonacci heap (amortized) | O(1)       | O(\log(V)) | O(1)         | O(E + V * \log(V))   | O(E * \log(V))

As we can see, the implementations with decrease-key don’t provide a significant improvement in the case of binary and binomial heaps. However, if we use a different priority queue implementation, such as a Fibonacci heap, we can significantly decrease the time complexity. Still, we need to keep in mind that the time complexity given for the Fibonacci heap is amortized and distributed between operations.

6. Conclusion

In this article, we learned that a priority queue implementation that supports the decrease-key operation improves the algorithm’s complexity only with specific heaps, like the Fibonacci heap. However, most languages use a binary heap as the primary implementation of this data structure. Thus, the simplest way to approach this problem, and to avoid the increase in complexity for dense graphs, is to use a queue implementation that allows checking for existing nodes and to simulate updating by removing a node and re-inserting it with a new weight.