T-61.5020 Statistical Natural Language Processing
Answers 5 -- Markov chains and Hidden Markov Models
Version 1.0
The probability of the weather on the third day, given the observations of all three days, is

$$
P(q_3 \mid o_1, o_2, o_3)
= \frac{P(q_3,\, o_1, o_2, o_3)}{P(o_1, o_2, o_3)}
= \frac{\sum_{q_1, q_2} P(q_1, q_2, q_3,\, o_1, o_2, o_3)}
       {\sum_{q_1, q_2, q_3} P(q_1, q_2, q_3,\, o_1, o_2, o_3)}
$$
Let's apply the Markov assumption:
$$
P(q_1, q_2, q_3,\, o_1, o_2, o_3)
= P(q_1)\, P(o_1 \mid q_1)\, P(q_2 \mid q_1)\, P(o_2 \mid q_2)\, P(q_3 \mid q_2)\, P(o_3 \mid q_3)
$$

so that

$$
P(q_3 \mid o_1, o_2, o_3)
= \frac{\sum_{q_1, q_2} P(q_1)\, P(o_1 \mid q_1)\, P(q_2 \mid q_1)\, P(o_2 \mid q_2)\, P(q_3 \mid q_2)\, P(o_3 \mid q_3)}
       {\sum_{q_1, q_2, q_3} P(q_1)\, P(o_1 \mid q_1)\, P(q_2 \mid q_1)\, P(o_2 \mid q_2)\, P(q_3 \mid q_2)\, P(o_3 \mid q_3)}
$$
Above we used first the formula $P(A \mid B) = P(A, B) / P(B)$.
Then we noted that each day's observation depends only on that day's state,
$P(o_t \mid q_1, \ldots, q_t, o_1, \ldots, o_{t-1}) = P(o_t \mid q_t)$,
and that each day's state depends only on the previous day's state,
$P(q_t \mid q_1, \ldots, q_{t-1}) = P(q_t \mid q_{t-1})$.
The numerator and the denominator of the equation have similar terms. Let's lighten the notation using the function
$\alpha_t(i) = P(o_1, \ldots, o_t,\, q_t = i)$:
$$
P(q_3 = i \mid o_1, o_2, o_3) = \frac{\alpha_3(i)}{\sum_j \alpha_3(j)}
$$
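The numerator, for example, can be grouped so that each sum is computed only once; this is exactly what the forward probabilities collect:

$$
\begin{aligned}
&\sum_{q_1, q_2} P(q_1)\, P(o_1 \mid q_1)\, P(q_2 \mid q_1)\, P(o_2 \mid q_2)\, P(q_3 \mid q_2)\, P(o_3 \mid q_3) \\
&\quad = P(o_3 \mid q_3) \sum_{q_2} P(q_3 \mid q_2)\,
  \underbrace{P(o_2 \mid q_2) \sum_{q_1} P(q_2 \mid q_1)\,
    \underbrace{P(q_1)\, P(o_1 \mid q_1)}_{\alpha_1(q_1)}}_{\alpha_2(q_2)}
= \alpha_3(q_3)
\end{aligned}
$$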
Now we will examine how this forward probability $\alpha_t(i)$ can be calculated.
Initial day

The initial state probabilities $P(q_1 = i)$ start the recursion.

First day

$$\alpha_1(i) = P(q_1 = i)\, P(o_1 \mid q_1 = i)$$

Second day

$$\alpha_2(i) = P(o_2 \mid q_2 = i) \sum_j \alpha_1(j)\, P(q_2 = i \mid q_1 = j)$$

Third day

$$\alpha_3(i) = P(o_3 \mid q_3 = i) \sum_j \alpha_2(j)\, P(q_3 = i \mid q_2 = j)$$
Now we have calculated all the necessary quantities. Inserting them into the equation above gives the probability of each weather state on the third day.
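As a sanity check, the forward pass is easy to sketch in Python. The transition and observation probabilities below are made-up placeholders, not the values given in the exercise, so only the structure of the computation carries over:

```python
import numpy as np

# Hypothetical HMM parameters -- placeholder values only, NOT the ones from the exercise.
# Hidden states: 0 = sunny, 1 = cloudy, 2 = rainy.
pi = np.array([0.5, 0.3, 0.2])              # P(q_1 = i)
A = np.array([[0.6, 0.3, 0.1],              # A[i, j] = P(q_{t+1} = j | q_t = i)
              [0.3, 0.4, 0.3],
              [0.2, 0.4, 0.4]])
B = np.array([[0.7, 0.2, 0.1],              # B[i, k] = P(o_t = k | q_t = i)
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

def forward(obs):
    """Return alpha[t, i] = P(o_1, ..., o_t, q_t = i) for every day t and state i."""
    alpha = np.zeros((len(obs), len(pi)))
    alpha[0] = pi * B[:, obs[0]]                        # first day
    for t in range(1, len(obs)):                        # later days: the forward recursion
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

obs = [0, 1, 0]                                         # a made-up three-day observation sequence
alpha = forward(obs)
print(alpha[-1] / alpha[-1].sum())                      # P(q_3 = i | o_1, o_2, o_3)
```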
Viterbi: First day

$$\delta_1(i) = P(q_1 = i)\, P(o_1 \mid q_1 = i),$$

where $\delta_t(i)$ is the probability of the best state sequence that ends in state $i$ on day $t$.

Viterbi: Second day

$$\delta_2(i) = P(o_2 \mid q_2 = i) \max_j \delta_1(j)\, P(q_2 = i \mid q_1 = j)$$

For each state we also store the predecessor that gave the maximum (the arrow backwards).
When computing the second day's values we note that for one of the states the probabilities coming from states 1 and 3 are the same. We can make an arbitrary choice between them; here state 3 was chosen.
Viterbi: Third day

$$\delta_3(i) = P(o_3 \mid q_3 = i) \max_j \delta_2(j)\, P(q_3 = i \mid q_2 = j)$$
From the final grid we can read off the most probable sequence of states: start from the most probable end state and follow the arrows backwards to the beginning. It seems that it has been sunny, cloudy and again sunny.
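The Viterbi search can be sketched in the same way. The parameters are again assumed placeholder values, and the function only illustrates the recursion and the backtracking along the stored arrows:

```python
import numpy as np

# The same kind of hypothetical parameters as in the forward sketch above.
pi = np.array([0.5, 0.3, 0.2])
A = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.4, 0.4]])
B = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.6]])

def viterbi(obs):
    """Return the most probable state sequence for the given observations."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))            # delta[t, i]: probability of the best path ending in state i
    psi = np.zeros((T, N), dtype=int)   # psi[t, i]: best predecessor of state i (the "arrow")
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] * A     # scores[j, i]: best path to j, then step j -> i
        psi[t] = scores.argmax(axis=0)         # ties are broken arbitrarily here
        delta[t] = scores.max(axis=0) * B[:, obs[t]]
    path = [int(delta[-1].argmax())]           # start from the most probable end state
    for t in range(T - 1, 0, -1):              # follow the arrows backwards
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

print(viterbi([0, 1, 0]))                      # e.g. [0, 1, 0] would mean sunny, cloudy, sunny
```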
Conclusion: The difference between the forward and Viterbi algorithms

The forward algorithm gives the correct probability for each state. However, it cannot be used to find the most probable path through the grid.

In the Viterbi search the state probabilities are only approximations, but it finds the best path correctly.

Computationally both algorithms are equally burdensome: the summation in the forward algorithm is simply replaced by maximization in the Viterbi algorithm.
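To make the last point concrete, both recursions can be written as a single sweep over the grid that differs only in the combining operation; a minimal sketch, assuming the same parameter layout as in the snippets above:

```python
import numpy as np

def trellis_pass(pi, A, B, obs, combine):
    """combine = np.sum gives the forward probabilities alpha_T(i);
    combine = np.max gives the Viterbi path probabilities delta_T(i)."""
    values = pi * B[:, obs[0]]
    for o in obs[1:]:
        values = combine(values[:, None] * A, axis=0) * B[:, o]
    return values
```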