import math # Just ignore this :-)
In the exercise below, you will see an example of how an HMM can be represented, and you will implement and experiment with the computation of the joint probability and various decodings, as explained in the lectures in week 8.
We can represent an HMM as a triple of matrices: a $K \times 1$ matrix with the initial state probabilities, a $K \times K$ matrix with the transition probabilities, and a $K \times |\Sigma|$ matrix with the emission probabilities. In Python we can write the matrices like this:
init_probs_7_state = [0.00, 0.00, 0.00, 1.00, 0.00, 0.00, 0.00]
trans_probs_7_state = [
    [0.00, 0.00, 0.90, 0.10, 0.00, 0.00, 0.00],
    [1.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [0.00, 1.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [0.00, 0.00, 0.05, 0.90, 0.05, 0.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
    [0.00, 0.00, 0.00, 0.10, 0.90, 0.00, 0.00],
]
emission_probs_7_state = [
    #   A     C     G     T
    [0.30, 0.25, 0.25, 0.20],
    [0.20, 0.35, 0.15, 0.30],
    [0.40, 0.15, 0.20, 0.25],
    [0.25, 0.25, 0.25, 0.25],
    [0.20, 0.40, 0.30, 0.10],
    [0.30, 0.20, 0.30, 0.20],
    [0.15, 0.30, 0.20, 0.35],
]
How do we use these matrices? Remember that we are given some sequence of observations, e.g. like this:
obs_example = 'GTTTCCCAGTGTATATCGAGGGATACTACGTGCATAGTAACATCGGCCAA'
To make a lookup in our three matrices we have to translate each symbol in the string to an index.
def translate_observations_to_indices(obs):
    mapping = {'a': 0, 'c': 1, 'g': 2, 't': 3}
    return [mapping[symbol.lower()] for symbol in obs]
Let's try to translate the example above using this function:
obs_example_trans = translate_observations_to_indices(obs_example)
obs_example_trans
Use the function below to translate the indices back to observations:
def translate_indices_to_observations(indices):
    mapping = ['a', 'c', 'g', 't']
    return ''.join(mapping[idx] for idx in indices)
translate_indices_to_observations(translate_observations_to_indices(obs_example))
Now each symbol has been transformed (predictably) into a number, which makes it much easier to do lookups in our matrices. We'll do the same thing for a list of states (a path):
def translate_path_to_indices(path):
    return [int(state) for state in path]
def translate_indices_to_path(indices):
    return ''.join(str(i) for i in indices)
Given a path through an HMM, we can now translate it to a list of indices:
path_example = '33333333333321021021021021021021021021021021021021'
translate_path_to_indices(path_example)
Finally, we can collect the three matrices in a class to make it easier to work with our HMM.
class hmm:
    def __init__(self, init_probs, trans_probs, emission_probs):
        self.init_probs = init_probs
        self.trans_probs = trans_probs
        self.emission_probs = emission_probs
# Collect the matrices in a class.
hmm_7_state = hmm(init_probs_7_state, trans_probs_7_state, emission_probs_7_state)
# We can now reach the different matrices by their names. E.g.:
hmm_7_state.trans_probs
For testing, here's another model (which we will refer to as the 3-state model).
init_probs_3_state = [0.10, 0.80, 0.10]
trans_probs_3_state = [
    [0.90, 0.10, 0.00],
    [0.05, 0.90, 0.05],
    [0.00, 0.10, 0.90],
]
emission_probs_3_state = [
    #   A     C     G     T
    [0.40, 0.15, 0.20, 0.25],
    [0.25, 0.25, 0.25, 0.25],
    [0.20, 0.40, 0.30, 0.10],
]
hmm_3_state = hmm(init_probs_3_state, trans_probs_3_state, emission_probs_3_state)
Before using the model we'll write a function to check that the model is valid. That is, the matrices should have the right dimensions, all entries should be probabilities between 0 and 1, and the initial probabilities, each row of the transition matrix, and each row of the emission matrix should each sum to 1.
Write a function validate_hmm that, given a model, returns True if the model is valid and False otherwise:
def validate_hmm(model):
    pass
We can now use this function to check whether the example model is a valid model.
validate_hmm(hmm_7_state)
You might run into problems related to summing floating point numbers: floating point addition does not (always) give the exact result you expect, as illustrated by the examples below. How do you suggest dealing with this?
0.15 + 0.30 + 0.20 + 0.35
The order of the terms matters.
0.20 + 0.35 + 0.15 + 0.30
This is because the order changes the intermediate (prefix) sums:
0.15 + 0.30
0.20 + 0.35 + 0.15
0.15 + 0.30
One should never compare floating point numbers for exact equality; they are only approximations. Read more about the problems in 'What Every Computer Scientist Should Know About Floating-Point Arithmetic' at:
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
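A common way to deal with this is to compare sums to 1 within a small tolerance instead of testing for exact equality, e.g. with math.isclose. Along those lines, a minimal sketch of validate_hmm could look like the following (the exact set of checks and the tolerance are our assumptions, not prescribed by the exercise):
def validate_hmm(model):
    k = len(model.init_probs)
    # The transition matrix must be K x K and the emission matrix must have K rows.
    if len(model.trans_probs) != k or len(model.emission_probs) != k:
        return False
    if any(len(row) != k for row in model.trans_probs):
        return False
    # Treat the initial probabilities and every matrix row as one distribution each.
    distributions = [model.init_probs] + model.trans_probs + model.emission_probs
    # All entries must be probabilities ...
    if any(not (0 <= p <= 1) for dist in distributions for p in dist):
        return False
    # ... and each distribution must sum to (approximately) 1.
    return all(math.isclose(sum(dist), 1.0, abs_tol=1e-9) for dist in distributions)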
Recall that the joint probability $p({\bf X}, {\bf Z}) = p({\bf x}_1, \ldots, {\bf x}_N, {\bf z}_1, \ldots, {\bf z}_N)$ of a hidden Markov model (HMM) can be computed as
$$ p({\bf x}_1, \ldots, {\bf x}_N, {\bf z}_1, \ldots, {\bf z}_N) = p({\bf z}_1) \left[ \prod_{n=2}^N p({\bf z}_n \mid {\bf z}_{n-1}) \right] \prod_{n=1}^N p({\bf x}_n \mid {\bf z}_n) $$
Write a function joint_prob that, given a model (e.g. in the representation above), a sequence of observables ${\bf X}$, and a sequence of hidden states ${\bf Z}$, computes the joint probability according to the above formula.
def joint_prob(model, x, z):
    pass
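For reference, one possible way to turn the formula directly into code (a sketch; it assumes x and z have already been translated to index lists with the functions above):
def joint_prob(model, x, z):
    # p(z_1) * prod_{n=2..N} p(z_n | z_{n-1}) * prod_{n=1..N} p(x_n | z_n)
    p = model.init_probs[z[0]]
    for n in range(1, len(z)):
        p *= model.trans_probs[z[n - 1]][z[n]]
    for n in range(len(x)):
        p *= model.emission_probs[z[n]][x[n]]
    return p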
Now compute the joint probability of the ${\bf X}$ (x_short) and ${\bf Z}$ (z_short) below using the 7-state model (hmm_7_state) introduced above. (Remember to translate them first using the appropriate functions introduced above!)
x_short = 'GTTTCCCAGTGTATATCGAGGGATACTACGTGCATAGTAACATCGGCCAA'
z_short = '33333333333321021021021021021021021021021021021021'
# Your code here ...
Now implement the joint probability function in log space as explained in the lecture. We've given you a log-function that handles $\log(0)$.
def log(x):
    if x == 0:
        return float('-inf')
    return math.log(x)
def joint_prob_log(model, x, z):
    pass
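A possible sketch, mirroring joint_prob but adding logs instead of multiplying probabilities (using the log function above, so that a zero probability becomes $-\infty$):
def joint_prob_log(model, x, z):
    logp = log(model.init_probs[z[0]])
    for n in range(1, len(z)):
        logp += log(model.trans_probs[z[n - 1]][z[n]])
    for n in range(len(x)):
        logp += log(model.emission_probs[z[n]][x[n]])
    return logp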
Confirm that the log-space function is correct by comparing its output to the output of joint_prob.
# Your code here ...
Now that you have two ways to compute the joint probability given a model, a sequence of observations, and a sequence of hidden states, try to make an experiment to figure out how long a sequence can be before it becomes essential to use the log-transformed version. For this experiment we'll use two longer sequences.
x_long = 'TGAGTATCACTTAGGTCTATGTCTAGTCGTCTTTCGTAATGTTTGGTCTTGTCACCAGTTATCCTATGGCGCTCCGAGTCTGGTTCTCGAAATAAGCATCCCCGCCCAAGTCATGCACCCGTTTGTGTTCTTCGCCGACTTGAGCGACTTAATGAGGATGCCACTCGTCACCATCTTGAACATGCCACCAACGAGGTTGCCGCCGTCCATTATAACTACAACCTAGACAATTTTCGCTTTAGGTCCATTCACTAGGCCGAAATCCGCTGGAGTAAGCACAAAGCTCGTATAGGCAAAACCGACTCCATGAGTCTGCCTCCCGACCATTCCCATCAAAATACGCTATCAATACTAAAAAAATGACGGTTCAGCCTCACCCGGATGCTCGAGACAGCACACGGACATGATAGCGAACGTGACCAGTGTAGTGGCCCAGGGGAACCGCCGCGCCATTTTGTTCATGGCCCCGCTGCCGAATATTTCGATCCCAGCTAGAGTAATGACCTGTAGCTTAAACCCACTTTTGGCCCAAACTAGAGCAACAATCGGAATGGCTGAAGTGAATGCCGGCATGCCCTCAGCTCTAAGCGCCTCGATCGCAGTAATGACCGTCTTAACATTAGCTCTCAACGCTATGCAGTGGCTTTGGTGTCGCTTACTACCAGTTCCGAACGTCTCGGGGGTCTTGATGCAGCGCACCACGATGCCAAGCCACGCTGAATCGGGCAGCCAGCAGGATCGTTACAGTCGAGCCCACGGCAATGCGAGCCGTCACGTTGCCGAATATGCACTGCGGGACTACGGACGCAGGGCCGCCAACCATCTGGTTGACGATAGCCAAACACGGTCCAGAGGTGCCCCATCTCGGTTATTTGGATCGTAATTTTTGTGAAGAACACTGCAAACGCAAGTGGCTTTCCAGACTTTACGACTATGTGCCATCATTTAAGGCTACGACCCGGCTTTTAAGACCCCCACCACTAAATAGAGGTACATCTGA'
z_long = '3333321021021021021021021021021021021021021021021021021021021021021021033333333334564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564563210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210321021021021021021021021033334564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564563333333456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456332102102102102102102102102102102102102102102102102102102102102102102102102102102102102102102102103210210210210210210210210210210210210210210210210210210210210210'
Now compute the joint probability with joint_prob using the 7-state model (hmm_7_state) introduced above, and see when it breaks (i.e. when it wrongly becomes 0). Does this make sense? Here's some code to get you started.
for i in range(0, len(x_long), 100):
    x = x_long[:i]
    z = z_long[:i]
    x_trans = translate_observations_to_indices(x)
    z_trans = translate_path_to_indices(z)
    # Make your experiment here...
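One way the experiment could be completed (a sketch; it starts at i = 100 to skip the empty prefix and assumes the joint_prob and joint_prob_log sketches above):
for i in range(100, len(x_long) + 1, 100):
    x_trans = translate_observations_to_indices(x_long[:i])
    z_trans = translate_path_to_indices(z_long[:i])
    p = joint_prob(hmm_7_state, x_trans, z_trans)
    logp = joint_prob_log(hmm_7_state, x_trans, z_trans)
    # Underflow: the plain product is 0 although the log-probability is finite.
    if p == 0 and logp > float('-inf'):
        print('joint_prob underflows to 0 at i =', i)
        break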
In the cell below you should state for which $i$ computing the joint probability (for the two models considered) using joint_prob wrongly becomes 0.
Your answer here:
For the 7-state model, joint_prob becomes 0 for i = ?
Below you will implement and experiment with the Viterbi algorithm. The implementation has been split into three parts: filling out the $\omega$-table (compute_w), finding the probability of an optimal path from the table (opt_path_prob), and backtracking through the table to obtain a most probable path (backtrack).
We'll be working with the 7-state model (hmm_7_state) and the helper functions for translating between observations, hidden states, and indices, as introduced above.
Additionally, you're given the function below that constructs a table of a specific size filled with zeros.
def make_table(m, n):
    """Make a table with `m` rows and `n` columns filled with zeros."""
    return [[0] * n for _ in range(m)]
You'll be testing your code with the same two sequences as above, i.e:
x_short = 'GTTTCCCAGTGTATATCGAGGGATACTACGTGCATAGTAACATCGGCCAA'
z_short = '33333333333321021021021021021021021021021021021021'
x_long = 'TGAGTATCACTTAGGTCTATGTCTAGTCGTCTTTCGTAATGTTTGGTCTTGTCACCAGTTATCCTATGGCGCTCCGAGTCTGGTTCTCGAAATAAGCATCCCCGCCCAAGTCATGCACCCGTTTGTGTTCTTCGCCGACTTGAGCGACTTAATGAGGATGCCACTCGTCACCATCTTGAACATGCCACCAACGAGGTTGCCGCCGTCCATTATAACTACAACCTAGACAATTTTCGCTTTAGGTCCATTCACTAGGCCGAAATCCGCTGGAGTAAGCACAAAGCTCGTATAGGCAAAACCGACTCCATGAGTCTGCCTCCCGACCATTCCCATCAAAATACGCTATCAATACTAAAAAAATGACGGTTCAGCCTCACCCGGATGCTCGAGACAGCACACGGACATGATAGCGAACGTGACCAGTGTAGTGGCCCAGGGGAACCGCCGCGCCATTTTGTTCATGGCCCCGCTGCCGAATATTTCGATCCCAGCTAGAGTAATGACCTGTAGCTTAAACCCACTTTTGGCCCAAACTAGAGCAACAATCGGAATGGCTGAAGTGAATGCCGGCATGCCCTCAGCTCTAAGCGCCTCGATCGCAGTAATGACCGTCTTAACATTAGCTCTCAACGCTATGCAGTGGCTTTGGTGTCGCTTACTACCAGTTCCGAACGTCTCGGGGGTCTTGATGCAGCGCACCACGATGCCAAGCCACGCTGAATCGGGCAGCCAGCAGGATCGTTACAGTCGAGCCCACGGCAATGCGAGCCGTCACGTTGCCGAATATGCACTGCGGGACTACGGACGCAGGGCCGCCAACCATCTGGTTGACGATAGCCAAACACGGTCCAGAGGTGCCCCATCTCGGTTATTTGGATCGTAATTTTTGTGAAGAACACTGCAAACGCAAGTGGCTTTCCAGACTTTACGACTATGTGCCATCATTTAAGGCTACGACCCGGCTTTTAAGACCCCCACCACTAAATAGAGGTACATCTGA'
z_long = '3333321021021021021021021021021021021021021021021021021021021021021021033333333334564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564563210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210321021021021021021021021033334564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564563333333456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456332102102102102102102102102102102102102102102102102102102102102102102102102102102102102102102102103210210210210210210210210210210210210210210210210210210210210210'
Remember to translate these sequences to indices before using them with your algorithms.
First, we will implement the algorithm without log-transformation. This will cause issues with numerical stability (like above when computing the joint probability), so we will use the log-transformation trick to fix this in the next section.
def compute_w(model, x):
    k = len(model.init_probs)
    n = len(x)
    w = make_table(k, n)
    # Base case: fill out w[i][0] for i = 0..k-1
    # ...
    # Inductive case: fill out w[i][j] for i = 0..k-1, j = 1..n-1
    # ...
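For reference, one possible way to fill out the table (a sketch following the Viterbi recursion from the lecture; the exact loop structure is a choice, not the only option):
def compute_w(model, x):
    k = len(model.init_probs)
    n = len(x)
    w = make_table(k, n)
    # Base case: w[i][0] = p(z_1 = i) * p(x_1 | z_1 = i)
    for i in range(k):
        w[i][0] = model.init_probs[i] * model.emission_probs[i][x[0]]
    # Inductive case: w[i][j] = p(x_j | i) * max over previous states t of w[t][j-1] * p(i | t)
    for j in range(1, n):
        for i in range(k):
            w[i][j] = model.emission_probs[i][x[j]] * max(
                w[t][j - 1] * model.trans_probs[t][i] for t in range(k))
    return w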
Now, write a function that given the $\omega$-table, returns the probability of an optimal path through the HMM. As explained in the lecture, this corresponds to finding the highest probability in the last column of the table.
def opt_path_prob(w):
    pass
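A minimal sketch (each row of w corresponds to a state, so the last column consists of the last entry of every row):
def opt_path_prob(w):
    # The probability of an optimal path is the largest value in the last column.
    return max(row[-1] for row in w)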
Now test your implementation in the box below:
w = compute_w(hmm_7_state, translate_observations_to_indices(x_short))
opt_path_prob(w)
Now do the same for x_long. What happens?
# Your code here ...
Implement backtracking to find a most probable path of hidden states given the $\omega$-table.
def backtrack(model, w):
    pass
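One possible sketch of the backtracking step (our assumption: it returns the path as a string via translate_indices_to_path; returning the list of indices would work just as well):
def backtrack(model, w):
    k = len(w)
    n = len(w[0])
    z = [0] * n
    # Start in a state with the highest probability in the last column.
    z[n - 1] = max(range(k), key=lambda i: w[i][n - 1])
    # Walk backwards, at each step picking a predecessor that attains the maximum.
    for j in range(n - 2, -1, -1):
        z[j] = max(range(k), key=lambda t: w[t][j] * model.trans_probs[t][z[j + 1]])
    return translate_indices_to_path(z)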
w = compute_w(hmm_7_state, translate_observations_to_indices(x_short))
z_viterbi = backtrack(hmm_7_state, w)
Now do the same for x_long. What happens?
# Your code here ...
Now implement the Viterbi algorithm with log transformation. The steps are the same as above.
def compute_w_log(model, x):
    k = len(model.init_probs)
    n = len(x)
    w = make_table(k, n)
    # Base case: fill out w[i][0] for i = 0..k-1
    # ...
    # Inductive case: fill out w[i][j] for i = 0..k-1, j = 1..n-1
    # ...
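A sketch of the log-transformed table fill, mirroring compute_w but replacing products with sums of logs (using the log function from earlier, so zero probabilities become $-\infty$):
def compute_w_log(model, x):
    k = len(model.init_probs)
    n = len(x)
    w = make_table(k, n)
    # Base case: log p(z_1 = i) + log p(x_1 | z_1 = i)
    for i in range(k):
        w[i][0] = log(model.init_probs[i]) + log(model.emission_probs[i][x[0]])
    # Inductive case: products become sums in log space.
    for j in range(1, n):
        for i in range(k):
            w[i][j] = log(model.emission_probs[i][x[j]]) + max(
                w[t][j - 1] + log(model.trans_probs[t][i]) for t in range(k))
    return w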
def opt_path_prob_log(w):
    pass
w = compute_w_log(hmm_7_state, translate_observations_to_indices(x_short))
opt_path_prob_log(w)
Now do the same for x_long. What happens?
# Your code here ...
def backtrack_log(model, w):
    pass
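And a matching sketch of the backtracking in log space (again assuming the path is returned as a string):
def backtrack_log(model, w):
    k = len(w)
    n = len(w[0])
    z = [0] * n
    z[n - 1] = max(range(k), key=lambda i: w[i][n - 1])
    for j in range(n - 2, -1, -1):
        z[j] = max(range(k), key=lambda t: w[t][j] + log(model.trans_probs[t][z[j + 1]]))
    return translate_indices_to_path(z)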
w = compute_w_log(hmm_7_state, translate_observations_to_indices(x_short))
z_viterbi_log = backtrack_log(hmm_7_state, w)
Now do the same for x_long. What happens?
# Your code here ...
Think about how to verify that your implementations of Viterbi (i.e. compute_w, opt_path_prob, backtrack, and their log-transformed variants compute_w_log, opt_path_prob_log, backtrack_log) are correct.
One thing that should hold is that the probability of a most likely path as computed by opt_path_prob (or opt_path_prob_log) for a given sequence of observables (e.g. x_short or x_long) should be equal to the joint probability of a corresponding most probable path as found by backtrack (or backtrack_log) and the given sequence of observables. Why?
Make an experiment that validates that this is the case for your implementations of Viterbi and x_short and x_long.
# Your code here ...
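One possible version of such a check, using the log-space variants so that x_long does not underflow (this sketch assumes the implementations above, that opt_path_prob_log returns the maximum of the last column, and that backtrack_log returns a path string):
for x in [x_short, x_long]:
    x_trans = translate_observations_to_indices(x)
    w = compute_w_log(hmm_7_state, x_trans)
    z_trans = translate_path_to_indices(backtrack_log(hmm_7_state, w))
    # The optimum read off the table should match the joint probability of the
    # backtracked path together with the observations.
    assert math.isclose(opt_path_prob_log(w),
                        joint_prob_log(hmm_7_state, x_trans, z_trans),
                        abs_tol=1e-9)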
Make an experiment that investigates how long the input string can be before backtrack and backtrack_log start to disagree on a most likely path and its probability.
# Your code here ...
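A possible sketch, comparing the two decodings on growing prefixes of x_long (starting at i = 100 to skip the empty prefix):
for i in range(100, len(x_long) + 1, 100):
    x_trans = translate_observations_to_indices(x_long[:i])
    z_no_log = backtrack(hmm_7_state, compute_w(hmm_7_state, x_trans))
    z_log = backtrack_log(hmm_7_state, compute_w_log(hmm_7_state, x_trans))
    if z_no_log != z_log:
        print('The two backtracked paths first differ at i =', i)
        break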
If you have time, try to implement posterior decoding (with scaling, if possible) as explained in the lecture.
def forward(model, x):
    pass
def backward(model, x):
    pass
def posterior_decoding(model, x):
    pass
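As a starting point, here is a sketch of the scaled forward-backward computation. Note that it deviates slightly from the stubs above: forward also returns the scaling constants, and backward takes them as an extra argument; how you thread the scaling through is up to you.
def forward(model, x):
    # Scaled forward algorithm: returns the scaled alpha table and the per-column
    # scaling constants c (the product c[0] * ... * c[j] equals p(x_1, ..., x_{j+1})).
    k, n = len(model.init_probs), len(x)
    alpha = make_table(k, n)
    c = [0.0] * n
    for i in range(k):
        alpha[i][0] = model.init_probs[i] * model.emission_probs[i][x[0]]
    c[0] = sum(alpha[i][0] for i in range(k))
    for i in range(k):
        alpha[i][0] /= c[0]
    for j in range(1, n):
        for i in range(k):
            alpha[i][j] = model.emission_probs[i][x[j]] * sum(
                alpha[t][j - 1] * model.trans_probs[t][i] for t in range(k))
        c[j] = sum(alpha[i][j] for i in range(k))
        for i in range(k):
            alpha[i][j] /= c[j]
    return alpha, c
def backward(model, x, c):
    # Scaled backward algorithm, reusing the scaling constants from forward.
    k, n = len(model.init_probs), len(x)
    beta = make_table(k, n)
    for i in range(k):
        beta[i][n - 1] = 1.0
    for j in range(n - 2, -1, -1):
        for i in range(k):
            beta[i][j] = sum(model.trans_probs[i][t] * model.emission_probs[t][x[j + 1]]
                             * beta[t][j + 1] for t in range(k)) / c[j + 1]
    return beta
def posterior_decoding(model, x):
    # For each position, pick the state with the highest posterior probability
    # p(z_j | x); with scaling this is proportional to alpha[i][j] * beta[i][j].
    alpha, c = forward(model, x)
    beta = backward(model, x, c)
    k, n = len(model.init_probs), len(x)
    z = [max(range(k), key=lambda i: alpha[i][j] * beta[i][j]) for j in range(n)]
    return translate_indices_to_path(z)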
Compare the Viterbi and posterior decodings of x_short and x_long using the 7-state model (hmm_7_state).
# Your code here ...