Viterbi Algorithm

We can calculate the optimal path in a hidden Markov model using a dynamic programming algorithm, widely known as the Viterbi algorithm. Viterbi [10] devised this algorithm for the decoding problem, although a more general description was originally given by Bellman [3].
Given a sequence X, denote by $v_{k}(i)$ the probability of the most probable path for the prefix $(x_{1},\ldots,x_{i})$ that ends in state k ($k \in Q$ and $1 \leq i \leq L$).
1.
Initialize:

\begin{displaymath}
v_{begin}(0) = 1
\end{displaymath} (6.12)

\begin{displaymath}
\forall k \neq begin \quad v_{k}(0) = 0
\end{displaymath} (6.13)

2.
For each $i=0,\ldots,L-1$ and for each $l \in Q$ recursively calculate:

 \begin{displaymath}
v_{l}(i+1) = e_{l}(x_{i+1}) \cdot \max_{k \in Q}
{\{v_{k}(i) \cdot a_{kl}\}}
\end{displaymath} (6.14)

3.
Finally, the value of $P(X\vert\Pi^{*})$ is:

\begin{displaymath}P(X\vert\Pi^{*}) = \max_{k \in Q}{\{v_{k}(L) \cdot a_{k,end}\}}
\end{displaymath} (6.15)

We can reconstruct the path $\Pi^{*}$ itself by keeping back pointers during the recursive stage and tracing them back from the final state, as in the sketch below.
Complexity: We calculate the values of $O(\vert Q\vert \cdot L)$ cells of the matrix V, spending $O(\vert Q\vert)$ operations per cell. The overall time complexity is therefore $O(L \cdot \vert Q\vert^2)$ and the space complexity is $O(L \cdot \vert Q\vert)$.
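
The following is a minimal Python sketch of the three stages above, including the back-pointer traceback. The model representation (dictionaries a and e for the transition and emission probabilities, and explicit "begin"/"end" state labels) is an illustrative assumption, not notation from the text.

\begin{verbatim}
def viterbi(x, states, a, e):
    """Return (P(X | Pi*), Pi*) for the observation sequence x.

    states -- the set Q of emitting states (without begin/end)
    a      -- transition probabilities, a[(k, l)]
    e      -- emission probabilities, e[(l, symbol)]
    """
    L = len(x)
    # v[i][k] = probability of the most probable path for x_1..x_i ending in k
    v = [{k: 0.0 for k in states} for _ in range(L + 1)]
    ptr = [{} for _ in range(L + 1)]

    # Initialization (6.12)-(6.13): v_begin(0) = 1 and v_k(0) = 0 otherwise,
    # folded here into the first transition out of the begin state.
    for l in states:
        v[1][l] = e[(l, x[0])] * a[("begin", l)]
        ptr[1][l] = "begin"

    # Recursion (6.14)
    for i in range(1, L):
        for l in states:
            best_k = max(states, key=lambda k: v[i][k] * a[(k, l)])
            v[i + 1][l] = e[(l, x[i])] * v[i][best_k] * a[(best_k, l)]
            ptr[i + 1][l] = best_k

    # Termination (6.15)
    last = max(states, key=lambda k: v[L][k] * a[(k, "end")])
    p_star = v[L][last] * a[(last, "end")]

    # Traceback of Pi* through the stored back pointers
    path = [last]
    for i in range(L, 1, -1):
        path.append(ptr[i][path[-1]])
    path.reverse()
    return p_star, path


if __name__ == "__main__":
    # Hypothetical two-state ("fair"/"loaded" coin) model for illustration.
    states = {"F", "L"}
    a = {("begin", "F"): 0.5, ("begin", "L"): 0.5,
         ("F", "F"): 0.8, ("F", "L"): 0.1, ("F", "end"): 0.1,
         ("L", "F"): 0.1, ("L", "L"): 0.8, ("L", "end"): 0.1}
    e = {("F", "H"): 0.5, ("F", "T"): 0.5,
         ("L", "H"): 0.9, ("L", "T"): 0.1}
    print(viterbi("HHHT", states, a, e))
\end{verbatim}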
Since we are dealing with probabilities, the extensive multiplications we perform may result in numerical underflow. This can be avoided if we choose to work with logarithmic scores. We therefore redefine $v_{k}(i)$ to be the logarithmic score of the most probable path for the prefix $(x_{1},\ldots,x_{i})$ that ends in state k.
We shall initialize:

\begin{displaymath}
v_{begin}(0) = 0
\end{displaymath} (6.16)

\begin{displaymath}
\forall k \neq begin \quad v_{k}(0) = -\infty
\end{displaymath} (6.17)

The recursion becomes:

\begin{displaymath}v_{l}(i+1) = \log{e_{l}(x_{i+1})} + \max_{k \in Q}
{\{v_{k}(i) + \log(a_{kl})\}}
\end{displaymath} (6.18)

Finally, the score for the best path $\Pi^{*}$ is:

\begin{displaymath}Score(X,\Pi^{*}) = \max_{k \in Q}{\{v_{k}(L) + \log(a_{k,end})\}}
\end{displaymath} (6.19)
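
In the same illustrative Python representation, the log-space version simply replaces products with sums; the helper log0 (with log 0 taken to be $-\infty$) is an assumption added for the sketch.

\begin{verbatim}
import math

def log0(p):
    """Natural logarithm, with log(0) taken to be -infinity."""
    return math.log(p) if p > 0 else float("-inf")

def viterbi_log(x, states, a, e):
    """Return (Score(X, Pi*), Pi*) using logarithmic scores."""
    L = len(x)
    v = [{k: float("-inf") for k in states} for _ in range(L + 1)]
    ptr = [{} for _ in range(L + 1)]

    # Initialization (6.16)-(6.17), folded into the first transition.
    for l in states:
        v[1][l] = log0(e[(l, x[0])]) + log0(a[("begin", l)])
        ptr[1][l] = "begin"

    # Recursion (6.18): products of probabilities become sums of logarithms.
    for i in range(1, L):
        for l in states:
            best_k = max(states, key=lambda k: v[i][k] + log0(a[(k, l)]))
            v[i + 1][l] = log0(e[(l, x[i])]) + v[i][best_k] + log0(a[(best_k, l)])
            ptr[i + 1][l] = best_k

    # Termination (6.19)
    last = max(states, key=lambda k: v[L][k] + log0(a[(k, "end")]))
    score = v[L][last] + log0(a[(last, "end")])

    # Traceback of Pi*, exactly as in the probability-space version.
    path = [last]
    for i in range(L, 1, -1):
        path.append(ptr[i][path[-1]])
    path.reverse()
    return score, path
\end{verbatim}

When no underflow occurs, exponentiating the returned score recovers $P(X\vert\Pi^{*})$, and both versions report the same optimal path.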


Itshack Pe`er
1999-01-24