This applet gives a visual demonstration of the Viterbi algorithm. It requires Java 1.5. You can try the applet by entering output sequences using the letters "H" and "T", then clicking "Apply". Of course, you can provide your own model parameters if you'd like.
Below, we'll talk more about the applet and how it works.
To understand the Viterbi Algorithm, one needs to be familiar with
Hidden Markov Models.
A Hidden Markov Model (HMM) is a computational device that bears a slight resemblance to a finite state machine. An HMM is a 5-tuple (S, V, A, B, π), where S is a set of N states, V is a set of output symbols, A is an N×N matrix of state-transition probabilities, B gives the probability of each state emitting each output symbol, and π is the initial state distribution.
Let's look at an example. This applet is pre-populated with a set of parameters to model three coins. If you've changed these, try refreshing the page so that the sample parameters re-appear.
The first coin is fair while the second and third coins are (strongly) biased. Suppose that a person picked one of the coins and tossed it. We could see that the coin came up heads (or tails), but we could not tell whether the coin was fair or biased. Over the course of several tosses, the person might switch coins. Again, we can't tell which coin is being used – all we see are heads and tails. Our applet's default parameters model each coin as a state, and the person tossing the coins is more likely to re-use the same coin than they are to switch to a different one.
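To make the coin example concrete, here is one way it could be written down as HMM parameters. The applet's actual default numbers aren't listed in this article, so the values below are illustrative assumptions: a fair coin, two strongly biased coins, and an 80% chance of re-using the current coin on each toss.

```java
// Three-coin HMM parameters (illustrative values, not necessarily the
// applet's defaults). States: 0 = fair, 1 = head-biased, 2 = tail-biased.
// Outputs: 0 = heads (H), 1 = tails (T).
public class CoinModel {
    // Initial state distribution: each coin is equally likely at the start.
    static final double[] PI = {1.0 / 3, 1.0 / 3, 1.0 / 3};

    // Transition probabilities: re-using the same coin (the diagonal)
    // is more likely than switching to either other coin.
    static final double[][] A = {
        {0.8, 0.1, 0.1},
        {0.1, 0.8, 0.1},
        {0.1, 0.1, 0.8},
    };

    // Emission probabilities: P(output | state).
    static final double[][] B = {
        {0.5, 0.5},  // fair coin
        {0.9, 0.1},  // head-biased coin
        {0.1, 0.9},  // tail-biased coin
    };

    public static void main(String[] args) {
        // Sanity check: every probability row must sum to 1.
        checkRow(PI);
        for (double[] row : A) checkRow(row);
        for (double[] row : B) checkRow(row);
        System.out.println("all rows sum to 1");
    }

    static void checkRow(double[] row) {
        double sum = 0;
        for (double p : row) sum += p;
        if (Math.abs(sum - 1.0) > 1e-9) throw new IllegalStateException("bad row");
    }
}
```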
The coin example illustrates the "hidden" aspect of Hidden Markov Models. All we can see are heads and tails (the outputs). We do not know which coins (states) produced them. In other words, the outputs are visible but the state sequence is hidden.
The Viterbi algorithm gives us a way to determine the state sequence that most likely generated a particular output sequence. In our coin example, the Viterbi algorithm would tell us (probabilistically) when the coin was fair and when it was biased.
Let's represent the state sequence by q_{1} ... q_{T}. At each time step k, the Viterbi algorithm gives the probability δ_{k}(i) that state q_{k} is a particular state s_{i}, that we've reached q_{k} along the most likely path through q_{1} ... q_{k-1}, and that the model has generated the output sequence o_{1} ... o_{k}.
The algorithm computes these probabilities with the recurrence
δ_{1}(i) = π_{i} b_{i}(o_{1})
δ_{k}(j) = max_{i} [ δ_{k-1}(i) a_{ij} ] b_{j}(o_{k}), for k = 2 ... T
where a_{ij} is the probability of a transition from state s_{i} to state s_{j}, and b_{j}(o_{k}) is the probability of state s_{j} emitting output o_{k}.
In the applet, you'll see the output sequence shown across the top, with each symbol subscripted by its position in time. Below each symbol is the set of model states. The numbers next to each state are the δ values: the probabilities (actually, the natural logarithms of the probabilities) computed by the recurrence above.
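The δ computation can be sketched in a few lines of Java. This is a minimal illustration, not the applet's source: the model parameters are assumed values for the three-coin example, and, like the applet, it works with natural logarithms of the probabilities to avoid underflow on long sequences.

```java
// Forward (delta) pass of the Viterbi algorithm for a three-coin HMM.
// The parameters are assumed illustrative values, not the applet's defaults.
public class ViterbiDelta {
    static final double[] PI = {1.0 / 3, 1.0 / 3, 1.0 / 3};
    static final double[][] A = {{0.8, 0.1, 0.1}, {0.1, 0.8, 0.1}, {0.1, 0.1, 0.8}};
    static final double[][] B = {{0.5, 0.5}, {0.9, 0.1}, {0.1, 0.9}};

    // Fills the delta table for an observation sequence (0 = H, 1 = T)
    // and returns the 1-based index of the most likely final state.
    static int mostLikelyFinalState(int[] o) {
        int n = PI.length, t = o.length;
        double[][] delta = new double[t][n]; // log probabilities

        // Base case: delta_1(i) = log pi_i + log b_i(o_1)
        for (int i = 0; i < n; i++)
            delta[0][i] = Math.log(PI[i]) + Math.log(B[i][o[0]]);

        // Recurrence: delta_k(j) = max_i [delta_{k-1}(i) + log a_ij] + log b_j(o_k)
        for (int k = 1; k < t; k++)
            for (int j = 0; j < n; j++) {
                double best = Double.NEGATIVE_INFINITY;
                for (int i = 0; i < n; i++)
                    best = Math.max(best, delta[k - 1][i] + Math.log(A[i][j]));
                delta[k][j] = best + Math.log(B[j][o[k]]);
            }

        // The most likely final state maximizes delta_T.
        int argmax = 0;
        for (int j = 1; j < n; j++)
            if (delta[t - 1][j] > delta[t - 1][argmax]) argmax = j;
        return argmax + 1;
    }

    public static void main(String[] args) {
        // HHH is far more likely to come from the head-biased coin (state 2).
        System.out.println(mostLikelyFinalState(new int[]{0, 0, 0})); // prints 2
    }
}
```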
The Viterbi algorithm also gives us a way to keep track of the state sequence. This is done using a matrix I[j,k], where j = 1...N indexes a state and k = 1...T indexes time; element I[j,k] records which state at time k-1 lay on the most likely path ending in state j at time k.
The I matrix gives us the information necessary to reconstruct the path via backtracking. In the applet, edges between states are taken directly from elements of I[j,k].
Using the applet's sample parameters, try giving an output sequence of HHHHH. You'll see that the predicted path lies strictly along state 2, the head-biased coin. If you give an output sequence of TTTTT, the applet will find a path that lies strictly along state 3, the tail-biased coin.
An output sequence of HHHHHTTTTT produces a path that starts in state 2 (the head-biased coin) and switches to state 3 (the tail-biased coin). Finally, an output sequence of HHHHHTTTTTHHHHHTTTTTHTHTHTH yields a path that switches back and forth between the two biased coins (states 2 and 3) but finishes on the fair coin (state 1).
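Putting the recurrence and the I matrix together gives a complete decoder. The sketch below uses assumed illustrative parameters (each coin is kept with probability 0.8, and the biased coins land on their favored side 90% of the time; the applet's real defaults may differ). A psi array plays the role of I[j,k], and the path is recovered by backtracking from the most likely final state. With these numbers it reproduces the switch from the head-biased to the tail-biased coin described above.

```java
// A minimal Viterbi decoder for the three-coin model, with backtracking.
// Parameters are assumed illustrative values, not the applet's defaults.
public class ViterbiPath {
    static final double[] PI = {1.0 / 3, 1.0 / 3, 1.0 / 3};
    static final double[][] A = {{0.8, 0.1, 0.1}, {0.1, 0.8, 0.1}, {0.1, 0.1, 0.8}};
    static final double[][] B = {{0.5, 0.5}, {0.9, 0.1}, {0.1, 0.9}};

    // Decodes a string of 'H'/'T' outputs into the most likely state
    // sequence, returned with 1-based state numbers as in the applet.
    static int[] decode(String outputs) {
        int n = PI.length, t = outputs.length();
        int[] o = new int[t];
        for (int k = 0; k < t; k++) o[k] = outputs.charAt(k) == 'H' ? 0 : 1;

        double[][] delta = new double[t][n]; // log probabilities
        int[][] psi = new int[t][n];         // best predecessors: the role of I[j,k]

        for (int i = 0; i < n; i++)
            delta[0][i] = Math.log(PI[i]) + Math.log(B[i][o[0]]);
        for (int k = 1; k < t; k++)
            for (int j = 0; j < n; j++) {
                int best = 0;
                for (int i = 1; i < n; i++)
                    if (delta[k - 1][i] + Math.log(A[i][j])
                            > delta[k - 1][best] + Math.log(A[best][j]))
                        best = i;
                psi[k][j] = best;
                delta[k][j] = delta[k - 1][best] + Math.log(A[best][j]) + Math.log(B[j][o[k]]);
            }

        // Backtrack from the most likely final state.
        int cur = 0;
        for (int j = 1; j < n; j++)
            if (delta[t - 1][j] > delta[t - 1][cur]) cur = j;
        int[] path = new int[t];
        for (int k = t - 1; k >= 0; k--) {
            path[k] = cur + 1;
            if (k > 0) cur = psi[k][cur];
        }
        return path;
    }

    public static void main(String[] args) {
        // Five heads on the head-biased coin, then a switch to the
        // tail-biased coin: prints [2, 2, 2, 2, 2, 3, 3, 3, 3, 3]
        System.out.println(java.util.Arrays.toString(decode("HHHHHTTTTT")));
    }
}
```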
This introduction to HMMs and the Viterbi algorithm has been brief, but hopefully it's given you a basic understanding of what the applet does. If you're interested in reading an authoritative (and a much more complete!) reference on Hidden Markov Models, see A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition by Lawrence Rabiner, from the Proceedings of the IEEE Volume 77, Number 2.