In [2]:
import math  # Just ignore this :-)

ML - Week 10 - Practical Exercises

In the exercises below, you will see an example of how an HMM can be represented, and you will implement and experiment with the computation of the joint probability and various decodings, as explained in the lectures in week 8.

1 - Representing an HMM

We can represent an HMM with $K$ hidden states and alphabet $\Sigma$ as a triple of matrices: a $K \times 1$ matrix with the initial state probabilities, a $K \times K$ matrix with the transition probabilities, and a $K \times |\Sigma|$ matrix with the emission probabilities. In Python we can write the matrices like this:

In [3]:
init_probs_7_state = [0.00, 0.00, 0.00, 1.00, 0.00, 0.00, 0.00]

trans_probs_7_state = [
    [0.00, 0.00, 0.90, 0.10, 0.00, 0.00, 0.00],
    [1.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [0.00, 1.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [0.00, 0.00, 0.05, 0.90, 0.05, 0.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
    [0.00, 0.00, 0.00, 0.10, 0.90, 0.00, 0.00],
]

emission_probs_7_state = [
    #   A     C     G     T
    [0.30, 0.25, 0.25, 0.20],
    [0.20, 0.35, 0.15, 0.30],
    [0.40, 0.15, 0.20, 0.25],
    [0.25, 0.25, 0.25, 0.25],
    [0.20, 0.40, 0.30, 0.10],
    [0.30, 0.20, 0.30, 0.20],
    [0.15, 0.30, 0.20, 0.35],
]

How do we use these matrices? Remember that we are given some sequence of observations, e.g. like this:

In [4]:
obs_example = 'GTTTCCCAGTGTATATCGAGGGATACTACGTGCATAGTAACATCGGCCAA'

To look up entries in our three matrices, we have to translate each symbol in the string to an index.

In [5]:
def translate_observations_to_indices(obs):
    mapping = {'a': 0, 'c': 1, 'g': 2, 't': 3}
    return [mapping[symbol.lower()] for symbol in obs]

Let's try to translate the example above using this function:

In [6]:
obs_example_trans = translate_observations_to_indices(obs_example)
In [7]:
obs_example_trans
Out[7]:
[2, 3, 3, 3, 1, 1, 1, 0, 2, 3, 2, 3, 0, 3, 0, 3, 1, 2, 0, 2, 2, 2, 0, 3, 0,
 1, 3, 0, 1, 2, 3, 2, 1, 0, 3, 0, 2, 3, 0, 0, 1, 0, 3, 1, 2, 2, 1, 1, 0, 0]

Use the function below to translate the indices back to observations:

In [8]:
def translate_indices_to_observations(indices):
    mapping = ['a', 'c', 'g', 't']
    return ''.join(mapping[idx] for idx in indices)
In [9]:
translate_indices_to_observations(translate_observations_to_indices(obs_example))
Out[9]:
'gtttcccagtgtatatcgagggatactacgtgcatagtaacatcggccaa'

Now each symbol has been transformed (predictably) into a number, which makes it much easier to look up entries in our matrices. We'll do the same thing for a list of states (a path):

In [10]:
def translate_path_to_indices(path):
    return [int(i) for i in path]

def translate_indices_to_path(indices):
    return ''.join([str(i) for i in indices])

Given a path through an HMM, we can now translate it to a list of indices:

In [11]:
path_example = '33333333333321021021021021021021021021021021021021'

translate_path_to_indices(path_example)
Out[11]:
[3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1, 0, 2,
 1, 0, 2, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1, 0, 2, 1]

Finally, we can collect the three matrices in a class to make it easier to work with our HMM.

In [12]:
class hmm:
    def __init__(self, init_probs, trans_probs, emission_probs):
        self.init_probs = init_probs
        self.trans_probs = trans_probs
        self.emission_probs = emission_probs

# Collect the matrices in a class.
hmm_7_state = hmm(init_probs_7_state, trans_probs_7_state, emission_probs_7_state)

# We can now reach the different matrices by their names. E.g.:
hmm_7_state.trans_probs
Out[12]:
[[0.0, 0.0, 0.9, 0.1, 0.0, 0.0, 0.0],
 [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
 [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
 [0.0, 0.0, 0.05, 0.9, 0.05, 0.0, 0.0],
 [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0],
 [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
 [0.0, 0.0, 0.0, 0.1, 0.9, 0.0, 0.0]]

For testing, here's another model (which we will refer to as the 3-state model).

In [13]:
init_probs_3_state = [0.10, 0.80, 0.10]

trans_probs_3_state = [
    [0.90, 0.10, 0.00],
    [0.05, 0.90, 0.05],
    [0.00, 0.10, 0.90],
]

emission_probs_3_state = [
    #   A     C     G     T
    [0.40, 0.15, 0.20, 0.25],
    [0.25, 0.25, 0.25, 0.25],
    [0.20, 0.40, 0.30, 0.10],
]

hmm_3_state = hmm(init_probs_3_state, trans_probs_3_state, emission_probs_3_state)

2 - Validating an HMM (and handling floats)

Before using a model, we'll write a function to check that it is valid. That is, the matrices should have the right dimensions, and the following should hold:

  1. The initial probabilities must sum to 1.
  2. Each row in the matrix of transition probabilities must sum to 1.
  3. Each row in the matrix of emission probabilities must sum to 1.
  4. All numbers should be between 0 and 1, inclusive.

Write a function validate_hmm that, given a model, returns True if the model is valid and False otherwise:

In [14]:
def validate_hmm(model):
    pass

We can now use this function to check whether the example model is a valid model.

In [15]:
validate_hmm(hmm_7_state)

You might run into problems when summing floating point numbers, because such sums do not always give the result you would expect, as illustrated by the following examples. How do you suggest dealing with this?

In [1]:
0.15 + 0.30 + 0.20 + 0.35
Out[1]:
0.9999999999999999

The order of the terms matters.

In [2]:
0.20 + 0.35 + 0.15 + 0.30
Out[2]:
1.0

This is because reordering the terms changes the intermediate (prefix) sums:

In [3]:
0.15 + 0.30
Out[3]:
0.44999999999999996
In [4]:
0.20 + 0.35 + 0.15
Out[4]:
0.7000000000000001
In [5]:
0.15 + 0.30
Out[5]:
0.44999999999999996

One should never compare floating point numbers for exact equality; they are only approximations. Read more about the pitfalls in 'What Every Computer Scientist Should Know About Floating-Point Arithmetic' at:

http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
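
One way to deal with this is to compare sums against 1 with a small tolerance (e.g. math.isclose) instead of testing for exact equality. Below is one possible sketch of validate_hmm that takes this approach; it assumes the list-of-lists representation introduced above and is just one reasonable solution, not the only one.

from math import isclose

def validate_hmm(model):
    k = len(model.init_probs)

    # The matrices must have the right dimensions.
    if len(model.trans_probs) != k or len(model.emission_probs) != k:
        return False
    if any(len(row) != k for row in model.trans_probs):
        return False
    if len(set(len(row) for row in model.emission_probs)) != 1:
        return False

    # The initial distribution and every row of the transition and emission
    # matrices must contain values in [0, 1] that sum to 1, up to a small
    # floating point tolerance.
    for row in [model.init_probs] + model.trans_probs + model.emission_probs:
        if any(p < 0 or p > 1 for p in row):
            return False
        if not isclose(sum(row), 1.0, abs_tol=1e-9):
            return False

    return True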

3 - Computing the Joint Probability

Recall that the joint probability $p({\bf X}, {\bf Z}) = p({\bf x}_1, \ldots, {\bf x}_N, {\bf z}_1, \ldots, {\bf z}_N)$ of a hidden Markov model (HMM) can be computed as

$$ p({\bf x}_1, \ldots, {\bf x}_N, {\bf z}_1, \ldots, {\bf z}_N) = p({\bf z}_1) \left[ \prod_{n=2}^N p({\bf z}_n \mid {\bf z}_{n-1}) \right] \prod_{n=1}^N p({\bf x}_n \mid {\bf z}_n) $$

Implementing without log-transformation

Write a function joint_prob that, given a model (e.g. in the representation above), a sequence of observables ${\bf X}$, and a sequence of hidden states ${\bf Z}$, computes the joint probability according to the formula above.

In [16]:
def joint_prob(model, x, z):
    pass
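
As a reference, here is a minimal sketch of what joint_prob could look like, assuming x and z are index sequences of equal length (i.e. already translated):

def joint_prob(model, x, z):
    # First factor: p(z_1) * p(x_1 | z_1).
    p = model.init_probs[z[0]] * model.emission_probs[z[0]][x[0]]
    # Remaining factors: p(z_n | z_{n-1}) * p(x_n | z_n) for n = 2..N.
    for n in range(1, len(x)):
        p *= model.trans_probs[z[n-1]][z[n]] * model.emission_probs[z[n]][x[n]]
    return p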

Now compute the joint probability of ${\bf X}$ (x_short) and ${\bf Z}$ (z_short) below using the 7-state model (hmm_7_state) introduced above. (Remember to translate them first using the appropriate functions introduced above!)

In [17]:
x_short = 'GTTTCCCAGTGTATATCGAGGGATACTACGTGCATAGTAACATCGGCCAA'
z_short = '33333333333321021021021021021021021021021021021021'

# Your code here ...
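
For example (assuming a working joint_prob), the computation could look like this:

x_short_trans = translate_observations_to_indices(x_short)
z_short_trans = translate_path_to_indices(z_short)
joint_prob(hmm_7_state, x_short_trans, z_short_trans)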

Implementing with log-transformation (i.e. in "log-space")

Now implement the joint probability function in log space as explained in the lecture. We've given you a log-function that handles $\log(0)$.

In [18]:
def log(x):
    if x == 0:
        return float('-inf')
    return math.log(x)

def joint_prob_log(model, x, z):
    pass
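
A possible sketch in log space: the products above become sums, and the log function given above takes care of $\log(0)$.

def joint_prob_log(model, x, z):
    logp = log(model.init_probs[z[0]]) + log(model.emission_probs[z[0]][x[0]])
    for n in range(1, len(x)):
        logp += log(model.trans_probs[z[n-1]][z[n]]) + log(model.emission_probs[z[n]][x[n]])
    return logp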

Confirm that the log-space function is correct by comparing its output to that of joint_prob.

In [ ]:
# Your code here ...
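
One simple check, assuming the translated sequences x_short_trans and z_short_trans from above: exponentiating the log-space result should give (approximately) the same number as joint_prob.

print(joint_prob(hmm_7_state, x_short_trans, z_short_trans))
print(math.exp(joint_prob_log(hmm_7_state, x_short_trans, z_short_trans)))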

Comparison of Implementations

Now that you have two ways to compute the joint probability given a model, a sequence of observations, and a sequence of hidden states, try to make an experiment to figure out how long a sequence can be before it becomes essential to use the log-transformed version. For this experiment we'll use two longer sequences.

In [19]:
x_long = 'TGAGTATCACTTAGGTCTATGTCTAGTCGTCTTTCGTAATGTTTGGTCTTGTCACCAGTTATCCTATGGCGCTCCGAGTCTGGTTCTCGAAATAAGCATCCCCGCCCAAGTCATGCACCCGTTTGTGTTCTTCGCCGACTTGAGCGACTTAATGAGGATGCCACTCGTCACCATCTTGAACATGCCACCAACGAGGTTGCCGCCGTCCATTATAACTACAACCTAGACAATTTTCGCTTTAGGTCCATTCACTAGGCCGAAATCCGCTGGAGTAAGCACAAAGCTCGTATAGGCAAAACCGACTCCATGAGTCTGCCTCCCGACCATTCCCATCAAAATACGCTATCAATACTAAAAAAATGACGGTTCAGCCTCACCCGGATGCTCGAGACAGCACACGGACATGATAGCGAACGTGACCAGTGTAGTGGCCCAGGGGAACCGCCGCGCCATTTTGTTCATGGCCCCGCTGCCGAATATTTCGATCCCAGCTAGAGTAATGACCTGTAGCTTAAACCCACTTTTGGCCCAAACTAGAGCAACAATCGGAATGGCTGAAGTGAATGCCGGCATGCCCTCAGCTCTAAGCGCCTCGATCGCAGTAATGACCGTCTTAACATTAGCTCTCAACGCTATGCAGTGGCTTTGGTGTCGCTTACTACCAGTTCCGAACGTCTCGGGGGTCTTGATGCAGCGCACCACGATGCCAAGCCACGCTGAATCGGGCAGCCAGCAGGATCGTTACAGTCGAGCCCACGGCAATGCGAGCCGTCACGTTGCCGAATATGCACTGCGGGACTACGGACGCAGGGCCGCCAACCATCTGGTTGACGATAGCCAAACACGGTCCAGAGGTGCCCCATCTCGGTTATTTGGATCGTAATTTTTGTGAAGAACACTGCAAACGCAAGTGGCTTTCCAGACTTTACGACTATGTGCCATCATTTAAGGCTACGACCCGGCTTTTAAGACCCCCACCACTAAATAGAGGTACATCTGA'
z_long = '3333321021021021021021021021021021021021021021021021021021021021021021033333333334564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564563210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210321021021021021021021021033334564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564563333333456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456332102102102102102102102102102102102102102102102102102102102102102102102102102102102102102102102103210210210210210210210210210210210210210210210210210210210210210'

Now compute the joint probability with joint_prob using the 7-state model (hmm_7_state) introduced above, and see when it breaks (i.e. when the result wrongfully becomes 0). Does this make sense? Here's some code to get you started.

In [20]:
for i in range(0, len(x_long), 100):
    x = x_long[:i]
    z = z_long[:i]
    
    x_trans = translate_observations_to_indices(x)
    z_trans = translate_path_to_indices(z)
    
    # Make your experiment here...
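
One possible experiment, sketched under the assumption that joint_prob and joint_prob_log are implemented as above: the log-space value is only $-\infty$ when the true probability is 0, so a prefix where joint_prob returns 0 while joint_prob_log is finite signals underflow.

for i in range(0, len(x_long), 100):
    if i == 0:
        continue  # skip the empty prefix
    x_trans = translate_observations_to_indices(x_long[:i])
    z_trans = translate_path_to_indices(z_long[:i])

    p = joint_prob(hmm_7_state, x_trans, z_trans)
    logp = joint_prob_log(hmm_7_state, x_trans, z_trans)

    if p == 0 and logp > float('-inf'):
        print('joint_prob underflows to 0 at i =', i)
        break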

In the cell below you should state for which $i$ the joint probability computed with joint_prob (for the two models considered) wrongfully becomes 0.

Your answer here:

For the 7-state model, joint_prob becomes 0 for i = ? .

4 - Viterbi Decoding

Below you will implement and experiment with the Viterbi algorithm. The implementation has been split into three parts:

  1. Fill out the $\omega$ table using the recursion presented at the lecture.
  2. Find the state with the highest probability after observing the entire sequence of observations.
  3. Backtrack from the state found in the previous step to obtain the optimal path.

We'll be working with the 7-state model (hmm_7_state) and the helper functions for translating between observations, hidden states, and indices, as introduced above.

Additionally, you're given the function below that constructs a table of a specific size filled with zeros.

In [ ]:
def make_table(m, n):
    """Make a table with `m` rows and `n` columns filled with zeros."""
    return [[0] * n for _ in range(m)]

You'll be testing your code with the same two sequences as above, i.e.:

In [24]:
x_short = 'GTTTCCCAGTGTATATCGAGGGATACTACGTGCATAGTAACATCGGCCAA'
z_short = '33333333333321021021021021021021021021021021021021'
In [25]:
x_long = 'TGAGTATCACTTAGGTCTATGTCTAGTCGTCTTTCGTAATGTTTGGTCTTGTCACCAGTTATCCTATGGCGCTCCGAGTCTGGTTCTCGAAATAAGCATCCCCGCCCAAGTCATGCACCCGTTTGTGTTCTTCGCCGACTTGAGCGACTTAATGAGGATGCCACTCGTCACCATCTTGAACATGCCACCAACGAGGTTGCCGCCGTCCATTATAACTACAACCTAGACAATTTTCGCTTTAGGTCCATTCACTAGGCCGAAATCCGCTGGAGTAAGCACAAAGCTCGTATAGGCAAAACCGACTCCATGAGTCTGCCTCCCGACCATTCCCATCAAAATACGCTATCAATACTAAAAAAATGACGGTTCAGCCTCACCCGGATGCTCGAGACAGCACACGGACATGATAGCGAACGTGACCAGTGTAGTGGCCCAGGGGAACCGCCGCGCCATTTTGTTCATGGCCCCGCTGCCGAATATTTCGATCCCAGCTAGAGTAATGACCTGTAGCTTAAACCCACTTTTGGCCCAAACTAGAGCAACAATCGGAATGGCTGAAGTGAATGCCGGCATGCCCTCAGCTCTAAGCGCCTCGATCGCAGTAATGACCGTCTTAACATTAGCTCTCAACGCTATGCAGTGGCTTTGGTGTCGCTTACTACCAGTTCCGAACGTCTCGGGGGTCTTGATGCAGCGCACCACGATGCCAAGCCACGCTGAATCGGGCAGCCAGCAGGATCGTTACAGTCGAGCCCACGGCAATGCGAGCCGTCACGTTGCCGAATATGCACTGCGGGACTACGGACGCAGGGCCGCCAACCATCTGGTTGACGATAGCCAAACACGGTCCAGAGGTGCCCCATCTCGGTTATTTGGATCGTAATTTTTGTGAAGAACACTGCAAACGCAAGTGGCTTTCCAGACTTTACGACTATGTGCCATCATTTAAGGCTACGACCCGGCTTTTAAGACCCCCACCACTAAATAGAGGTACATCTGA'
z_long = '3333321021021021021021021021021021021021021021021021021021021021021021033333333334564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564563210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210210321021021021021021021021033334564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564564563333333456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456456332102102102102102102102102102102102102102102102102102102102102102102102102102102102102102102102103210210210210210210210210210210210210210210210210210210210210210'

Remember to translate these sequences to indices before using them with your algorithms.

Implementing without log-transformation

First, we will implement the algorithm without log-transformation. This will cause issues with numerical stability (like above when computing the joint probability), so we will use the log-transformation trick to fix this in the next section.

Computation of the $\omega$ table

In [ ]:
def compute_w(model, x):
    k = len(model.init_probs)
    n = len(x)
    
    w = make_table(k, n)
    
    # Base case: fill out w[i][0] for i = 0..k-1
    # ...
    
    # Inductive case: fill out w[i][j] for i = 0..k-1, j = 1..n-1
    # ...
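
One way the recursion could be filled in (a sketch, assuming x is a sequence of indices):

def compute_w(model, x):
    k = len(model.init_probs)
    n = len(x)

    w = make_table(k, n)

    # Base case: w[i][0] = p(z_1 = i) * p(x_1 | z_1 = i).
    for i in range(k):
        w[i][0] = model.init_probs[i] * model.emission_probs[i][x[0]]

    # Inductive case: extend the best predecessor by one transition and one emission.
    for j in range(1, n):
        for i in range(k):
            w[i][j] = model.emission_probs[i][x[j]] * max(
                w[t][j-1] * model.trans_probs[t][i] for t in range(k))

    return w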

Finding the joint probability of an optimal path

Now write a function that, given the $\omega$-table, returns the probability of an optimal path through the HMM. As explained in the lecture, this corresponds to finding the highest probability in the last column of the table.

In [ ]:
def opt_path_prob(w):
    pass
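
A minimal sketch: the probability of an optimal path is the largest entry in the last column of w.

def opt_path_prob(w):
    return max(row[-1] for row in w)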

Now test your implementation in the box below:

In [ ]:
w = compute_w(hmm_7_state, translate_observations_to_indices(x_short))
opt_path_prob(w)

Now do the same for x_long. What happens?

In [26]:
# Your code here ...

Obtaining an optimal path through backtracking

Implement backtracking to find a most probable path of hidden states given the $\omega$-table.

In [ ]:
def backtrack(model, w):
    pass
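
One possible sketch. Since each w[i][j] already includes the emission factor for position j, that factor is constant in the argmax over predecessors and can be dropped, so backtracking only needs w and the transition probabilities. The result is a list of state indices (use translate_indices_to_path to turn it into a string).

def backtrack(model, w):
    k = len(w)
    n = len(w[0])

    z = [0] * n
    # The last state of an optimal path maximises the last column.
    z[n-1] = max(range(k), key=lambda i: w[i][n-1])

    # Walk backwards, picking a predecessor consistent with the recursion.
    for j in range(n-1, 0, -1):
        z[j-1] = max(range(k), key=lambda i: w[i][j-1] * model.trans_probs[i][z[j]])

    return z
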
In [ ]:
w = compute_w(hmm_7_state, translate_observations_to_indices(x_short))
z_viterbi = backtrack(hmm_7_state, w)

Now do the same for x_long. What happens?

In [ ]:
# Your code here ...

Implementing with log-transformation

Now implement the Viterbi algorithm with log transformation. The steps are the same as above.

Computation of the $\omega$ table

In [ ]:
def compute_w_log(model, x):
    k = len(model.init_probs)
    n = len(x)
    
    w = make_table(k, n)
    
    # Base case: fill out w[i][0] for i = 0..k-1
    # ...
    
    # Inductive case: fill out w[i][j] for i = 0..k-1, j = 1..n-1
    # ...
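
The same recursion in log space (a sketch under the same assumptions as compute_w): products become sums, and taking the max is unaffected because log is monotone.

def compute_w_log(model, x):
    k = len(model.init_probs)
    n = len(x)

    w = make_table(k, n)

    # Base case: log p(z_1 = i) + log p(x_1 | z_1 = i).
    for i in range(k):
        w[i][0] = log(model.init_probs[i]) + log(model.emission_probs[i][x[0]])

    # Inductive case: sums of logs instead of products.
    for j in range(1, n):
        for i in range(k):
            w[i][j] = log(model.emission_probs[i][x[j]]) + max(
                w[t][j-1] + log(model.trans_probs[t][i]) for t in range(k))

    return w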

Finding the (log transformed) joint probability of an optimal path

In [ ]:
def opt_path_prob_log(w):
    pass
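
As before, a one-line sketch: the log-probability of an optimal path is the largest entry in the last column.

def opt_path_prob_log(w):
    return max(row[-1] for row in w)
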
In [ ]:
w = compute_w_log(hmm_7_state, translate_observations_to_indices(x_short))
opt_path_prob_log(w)

Now do the same for x_long. What happens?

In [ ]:
# Your code here ...

Obtaining an optimal path through backtracking

In [ ]:
def backtrack_log(model, w):
    pass
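
A sketch mirroring backtrack, with the product replaced by a sum of logs:

def backtrack_log(model, w):
    k = len(w)
    n = len(w[0])

    z = [0] * n
    z[n-1] = max(range(k), key=lambda i: w[i][n-1])

    for j in range(n-1, 0, -1):
        z[j-1] = max(range(k), key=lambda i: w[i][j-1] + log(model.trans_probs[i][z[j]]))

    return z
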
In [ ]:
w = compute_w_log(hmm_7_state, translate_observations_to_indices(x_short))
z_viterbi_log = backtrack_log(hmm_7_state, w)

Now do the same for x_long. What happens?

In [ ]:
# Your code here ...

Does it work?

Think about how to verify that your implementations of Viterbi (i.e. compute_w, opt_path_prob, backtrack, and their log-transformed variants compute_w_log, opt_path_prob_log, backtrack_log) are correct.

One thing that should hold is that the probability of a most likely path as computed by opt_path_prob (or opt_path_prob_log) for a given sequence of observables (e.g. x_short or x_long) should be equal to the joint probability of that sequence of observables and a corresponding most probable path as found by backtrack (or backtrack_log). Why?

Make an experiment that validates that this is the case for your implementations of Viterbi and x_short and x_long.

In [ ]:
# Your code here ...
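
One way to set up the experiment, assuming the implementations sketched above: for each sequence, the value returned by opt_path_prob (resp. opt_path_prob_log) should match the joint probability of the observations and the backtracked path, since both express the (log) probability of an optimal path. For x_long the non-log pair will both underflow to 0, which is exactly the problem the log-transformation avoids.

for x in [x_short, x_long]:
    x_trans = translate_observations_to_indices(x)

    w = compute_w(hmm_7_state, x_trans)
    z = backtrack(hmm_7_state, w)
    print(opt_path_prob(w), joint_prob(hmm_7_state, x_trans, z))

    w_log = compute_w_log(hmm_7_state, x_trans)
    z_log = backtrack_log(hmm_7_state, w_log)
    print(opt_path_prob_log(w_log), joint_prob_log(hmm_7_state, x_trans, z_log))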

Does log transformation matter?

Make an experiment that investigates how long the input string can be before backtrack and backtrack_log start to disagree on a most likely path and its probability.

In [ ]:
# Your code here ...
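
A sketch of one such experiment, assuming the implementations above: run both versions on growing prefixes of x_long and report the first length at which the decoded paths differ (once the w table underflows, backtrack no longer has meaningful values to compare).

for i in range(100, len(x_long) + 1, 100):
    x_trans = translate_observations_to_indices(x_long[:i])

    z = backtrack(hmm_7_state, compute_w(hmm_7_state, x_trans))
    z_log = backtrack_log(hmm_7_state, compute_w_log(hmm_7_state, x_trans))

    if z != z_log:
        print('backtrack and backtrack_log first disagree at length', i)
        break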

5 - Posterior Decoding

If you have time, try to implement posterior decoding (with scaling, if possible) as explained in the lecture.

In [ ]:
def forward(model, x):
    pass

def backward(model, x):
    pass

def posterior_decoding(model, x):
    pass
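
A possible sketch with scaling, following the scaled forward/backward recursions from the lecture. It deviates slightly from the stubs above: forward also returns the per-column scaling constants, and backward takes them as an extra argument so both tables use the same scaling. As elsewhere, x is assumed to be a sequence of indices.

def forward(model, x):
    k = len(model.init_probs)
    n = len(x)

    alpha = make_table(k, n)
    c = [0.0] * n  # scaling constants, one per column

    # Base case, then normalise the column so it sums to 1.
    for i in range(k):
        alpha[i][0] = model.init_probs[i] * model.emission_probs[i][x[0]]
    c[0] = sum(alpha[i][0] for i in range(k))
    for i in range(k):
        alpha[i][0] /= c[0]

    # Recursion; c[j] is the column sum before normalisation.
    for j in range(1, n):
        for i in range(k):
            alpha[i][j] = model.emission_probs[i][x[j]] * sum(
                alpha[t][j-1] * model.trans_probs[t][i] for t in range(k))
        c[j] = sum(alpha[i][j] for i in range(k))
        for i in range(k):
            alpha[i][j] /= c[j]

    return alpha, c

def backward(model, x, c):
    k = len(model.init_probs)
    n = len(x)

    beta = make_table(k, n)

    # Base case.
    for i in range(k):
        beta[i][n-1] = 1.0

    # Recursion, reusing the scaling constants from forward.
    for j in range(n-2, -1, -1):
        for i in range(k):
            beta[i][j] = sum(
                model.trans_probs[i][t] * model.emission_probs[t][x[j+1]] * beta[t][j+1]
                for t in range(k)) / c[j+1]

    return beta

def posterior_decoding(model, x):
    k = len(model.init_probs)
    n = len(x)

    alpha, c = forward(model, x)
    beta = backward(model, x, c)

    # With this scaling, alpha[i][j] * beta[i][j] equals P(z_j = i | x).
    return [max(range(k), key=lambda i: alpha[i][j] * beta[i][j]) for j in range(n)]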

Compare the Viterbi and posterior decodings of x_short and x_long using the 7-state model (hmm_7_state).

In [20]:
# Your code here ...
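
A sketch of how the comparison could be set up (assuming the log-space Viterbi and the posterior decoding sketched above); it counts at how many positions the two decodings disagree.

for x in [x_short, x_long]:
    x_trans = translate_observations_to_indices(x)

    z_viterbi = backtrack_log(hmm_7_state, compute_w_log(hmm_7_state, x_trans))
    z_posterior = posterior_decoding(hmm_7_state, x_trans)

    disagreements = sum(1 for a, b in zip(z_viterbi, z_posterior) if a != b)
    print('length', len(x), '- positions where the decodings differ:', disagreements)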