
Forward and Backward Probabilities


 \begin{problem}
The likelihood problem.\\
{\bf {INPUT:}} A hidden Markov model $M$, a sequence $X=(x_{1},\ldots,x_{L})$, a position $i$ in the
sequence and $k \in Q$.\\
{\bf {QUESTION:}} Compute the probability $P(\pi_{i}=k \vert X)$.
\end{problem}

For this we shall need some extra definitions.

Forward algorithm: Given a sequence $X=(x_{1},\ldots,x_{L})$, let us denote by $f_{k}(i)$ the probability of emitting the prefix $(x_{1},\ldots,x_{i})$ and reaching the state $\pi_{i}=k$:

\begin{displaymath}f_{k}(i)=P(x_{1},\ldots,x_{i}, \pi_{i}=k)
\end{displaymath} (20)


We use the same initial values for $f_{k}(0)$ as in the Viterbi algorithm:

\begin{displaymath}f_{begin}(0) = 1
\end{displaymath} (21)

\begin{displaymath}\forall_{k \neq begin} \quad f_{k}(0) = 0
\end{displaymath} (22)

In analogy to equation 6.15, we can use the recursive formula:

\begin{displaymath}f_{l}(i+1) = e_{l}(x_{i+1}) \cdot \sum_{k \in Q}
{f_{k}(i) \cdot a_{kl}}
\end{displaymath} (23)

We terminate the process by calculating:

\begin{displaymath}P(X) = \sum_{k \in Q}{f_{k}(L) \cdot a_{k,end}}
\end{displaymath} (24)
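
To make the recursion concrete, here is a minimal Python sketch of the forward algorithm (not part of the original notes). The parameter encoding, using dictionaries \texttt{a}, \texttt{e}, \texttt{a\_begin} and \texttt{a\_end} for the transition, emission and begin/end probabilities, is an assumption made for this example.

\begin{verbatim}
def forward(X, states, a, e, a_begin, a_end):
    # Forward probabilities f[k][i] = P(x_1..x_i, pi_i = k).
    # Assumed encoding: a[k][l], e[k][x] are transition/emission
    # probabilities, a_begin[k] = a_{begin,k}, a_end[k] = a_{k,end};
    # X is 0-indexed, positions i run from 1 to L.
    L = len(X)
    f = {k: [0.0] * (L + 1) for k in states}
    for k in states:            # first step: f_k(1) = a_{begin,k} * e_k(x_1)
        f[k][1] = a_begin[k] * e[k][X[0]]
    for i in range(1, L):       # recursion (23)
        for l in states:
            f[l][i + 1] = e[l][X[i]] * sum(f[k][i] * a[k][l] for k in states)
    px = sum(f[k][L] * a_end[k] for k in states)   # termination (24)
    return f, px
\end{verbatim}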

Backward algorithm: In a complementary manner, we denote by $b_{k}(i)$ the probability of the suffix $(x_{i+1},\ldots,x_{L})$ given $\pi_{i}=k$:

\begin{displaymath}b_{k}(i)=P(x_{i+1},\ldots,x_{L} \vert \pi_{i}=k)
\end{displaymath} (25)


In this case, we initialize:

\begin{displaymath}\forall_{k \in Q} \quad b_{k}(L) = a_{k,end}
\end{displaymath} (26)

The recursive formula is:

\begin{displaymath}b_{k}(i) = \sum_{l \in Q}{a_{kl} \cdot e_{l}(x_{i+1}) \cdot b_{l}(i+1)}
\end{displaymath} (27)

We terminate the process by calculating:

\begin{displaymath}P(X) = \sum_{l \in Q}{a_{begin,l} \cdot e_{l}(x_{1}) \cdot b_{l}(1)}
\end{displaymath} (28)
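
A corresponding Python sketch of the backward algorithm, under the same assumed parameter encoding as in the forward sketch above:

\begin{verbatim}
def backward(X, states, a, e, a_begin, a_end):
    # Backward probabilities b[k][i] = P(x_{i+1}..x_L | pi_i = k),
    # with the same assumed parameter encoding as the forward sketch.
    L = len(X)
    b = {k: [0.0] * (L + 1) for k in states}
    for k in states:                # initialization (26): b_k(L) = a_{k,end}
        b[k][L] = a_end[k]
    for i in range(L - 1, 0, -1):   # recursion (27)
        for k in states:
            b[k][i] = sum(a[k][l] * e[l][X[i]] * b[l][i + 1] for l in states)
    px = sum(a_begin[l] * e[l][X[0]] * b[l][1]
             for l in states)       # termination (28)
    return b, px
\end{verbatim}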

Complexity: All the values of $f_{k}(i)$ and $b_{k}(i)$ can be calculated in $O(L \cdot \vert Q\vert^2)$ time and stored in $O(L \cdot \vert Q\vert)$ space, as is the case with the Viterbi algorithm.

There is, however, one important difference: here we cannot trivially use logarithmic weights, since (unlike in the Viterbi algorithm) we do not only multiply probabilities, but also sum them. This may lead to numerical stability problems, such as underflow for long sequences, unless proper measures, such as scaling the probabilities, are taken.
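
One common way to realize the scaling idea is sketched below (an illustration under the same assumed encoding as the earlier sketches, not a prescribed implementation): each column of forward values is normalized to sum to 1, and the logarithms of the scaling factors are accumulated, so that $\log P(X)$ is recovered at the end without underflow.

\begin{verbatim}
import math

def forward_scaled(X, states, a, e, a_begin, a_end):
    # Forward recursion with per-position scaling; same assumed
    # parameter encoding as the earlier sketches.
    L = len(X)
    f = {k: [0.0] * (L + 1) for k in states}
    log_px = 0.0
    for i in range(1, L + 1):
        for l in states:
            if i == 1:
                f[l][1] = a_begin[l] * e[l][X[0]]
            else:
                f[l][i] = e[l][X[i - 1]] * sum(f[k][i - 1] * a[k][l]
                                               for k in states)
        c = sum(f[k][i] for k in states)   # scaling factor for position i
        for k in states:
            f[k][i] /= c
        log_px += math.log(c)
    log_px += math.log(sum(f[k][L] * a_end[k] for k in states))
    return log_px                          # equals log P(X)
\end{verbatim}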

Using the forward and backward probabilities we can compute the value of $P(\pi_{i}=k \vert X)$. Since the process has memory of length 1 only, the suffix depends only on the state at position $i$, so we can write:

 \begin{displaymath}
\begin{split}
P(X,\pi_{i}=k) &= P(x_{1},\ldots,x_{i},\pi_{i}=k) \cdot P(x_{i+1},\ldots,x_{L} \vert x_{1},\ldots,x_{i},\pi_{i}=k) = \\
&= P(x_{1},\ldots,x_{i},\pi_{i}=k) \cdot P(x_{i+1},\ldots,x_{L} \vert \pi_{i}=k) = \\
&= f_{k}(i) \cdot b_{k}(i)
\end{split}\end{displaymath} (29)

Using the definition of conditional probability, we obtain the solution to the likelihood problem:

 \begin{displaymath}
P(\pi_{i}=k \vert X) = \frac{P(X,\pi_{i}=k)}{P(X)} = \frac{f_{k}(i) \cdot b_{k}(i)}{P(X)}
\end{displaymath} (30)

where


\begin{displaymath}P(X) = \sum_{l \in Q}{a_{begin,l} \cdot e_{l}(x_{1}) \cdot b_{l}(1)}
\end{displaymath} (31)
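
Putting the pieces together, a sketch of the computation of the posterior probabilities of equation (30), using the hypothetical \texttt{forward} and \texttt{backward} functions sketched earlier:

\begin{verbatim}
def posterior(X, states, a, e, a_begin, a_end):
    # Posterior state probabilities P(pi_i = k | X), equation (30).
    f, px = forward(X, states, a, e, a_begin, a_end)
    b, _ = backward(X, states, a, e, a_begin, a_end)
    L = len(X)
    # index 0 is a placeholder; positions run from 1 to L
    return {k: [0.0] + [f[k][i] * b[k][i] / px for i in range(1, L + 1)]
            for k in states}
\end{verbatim}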

