Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/anshulranjan2004/pyhmm
A Python implementation of isolated word recognition using a Discrete Hidden Markov Model
hidden-markov-model hmm markov-model probability python
Last synced: 30 days ago
- Host: GitHub
- URL: https://github.com/anshulranjan2004/pyhmm
- Owner: AnshulRanjan2004
- License: mit
- Created: 2023-07-05T18:06:53.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-04-27T19:05:38.000Z (8 months ago)
- Last Synced: 2024-04-28T19:45:52.443Z (8 months ago)
- Topics: hidden-markov-model, hmm, markov-model, probability, python
- Language: Python
- Homepage:
- Size: 43.6 MB
- Stars: 2
- Watchers: 1
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
PyHMM
=====
HMM Implementation in Python
This is a simple implementation of isolated word recognition using a Discrete Hidden Markov Model.
Usage:
------------
* Instantiate the HMM by passing the file name of the model; the model is in JSON format
* Sample model files are in the models subdirectory
```python
from myhmm import MyHmm

model_file_name = r"./models/coins1.json"
hmm = MyHmm(model_file_name)
```
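The exact JSON schema of the model files is not documented here; as a rough illustration only, and assuming the top-level keys mirror the hmm.A, hmm.B, and hmm.pi attributes used later in this README, a coin-flipping model might be structured roughly as follows (state names and probabilities are made up, not taken from the repository):

```python
import json

# Hypothetical sketch of a model file such as ./models/coins1.json (assumed schema);
# "A" = transition probabilities, "B" = emission probabilities, "pi" = starting distribution.
example_model = {
    "A": {"Fair": {"Fair": 0.9, "Biased": 0.1},
          "Biased": {"Fair": 0.1, "Biased": 0.9}},
    "B": {"Fair": {"Heads": 0.5, "Tails": 0.5},
          "Biased": {"Heads": 0.8, "Tails": 0.2}},
    "pi": {"Fair": 0.5, "Biased": 0.5},
}

# write it out as JSON so it could be passed to the constructor above
with open("example_model.json", "w") as f:
    json.dump(example_model, f, indent=2)
```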
* Get the probability of a sequence of observations P(O|model) using the forward algorithm
```python
observations = ("Heads", "Tails", "Heads", "Heads", "Heads", "Tails")
prob_1 = hmm.forward(observations)
```
* Get the probability of a sequence of observations P(O|model) using the backward algorithm
```python
prob_2 = hmm.backward(observations)
```
* Get the hidden states using the Viterbi algorithm
```python
(prob, states) = hmm.viterbi(observations)
```
* For unsupervised learning, compute the model parameters using the forward-backward Baum-Welch algorithm
```python
hmm.forward_backward(observations)
# hmm.A will contain the transition probabilities, hmm.B the emission probabilities, and hmm.pi the starting distribution
```
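After training, the re-estimated parameters can be read straight off the instance; a minimal usage sketch using the attribute names from the comment above:

```python
# inspect the parameters re-estimated by Baum-Welch
print("Transition probabilities:", hmm.A)
print("Emission probabilities:", hmm.B)
print("Starting distribution:", hmm.pi)
```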
Note
------------
For long sequences of observations the HMM computations may underflow. In particular, training the HMM with multiple input sequences (for example, during speech recognition tasks) often results in underflow. The module myhmm_scaled can be used instead of myhmm to train the HMM on long sequences (a short illustration of the underflow issue follows the list below). It is important to note the following.
1. The implementation is as per Rabiner's paper with the errata addressed. See: http://alumni.media.mit.edu/~rahimi/rabiner/rabiner-errata/rabiner-errata.html
2. forward_scaled implements the scaled forward algorithm and returns log(P(Observations)) instead of P(Observations). If P(O) is greater than or equal to the minimum floating point number that can be represented, P(O) can be recovered with math.exp(log_p); if P(O) is smaller, the exponentiation will underflow.
3. backward_scaled implements the scaled version of the backward algorithm. It is implemented for the sole purpose of computing zi and gamma for training with the Baum-Welch algorithm. It should always be called after executing the forward procedure (as the clist needs to be set up).
4. forward_backward_multi_scaled implements the scaled training procedure, which supports multiple observation sequences.
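To see why scaling matters: the unscaled forward probability is a product of many factors below 1, so for long observation sequences it quickly drops below the smallest representable float. A minimal, self-contained sketch of the effect (illustrative only, not using this module):

```python
import math

# each observation typically contributes a factor well below 1 to P(O|model)
per_step_prob = 0.05
for T in (10, 100, 500, 1000):
    direct = per_step_prob ** T              # the unscaled product underflows to 0.0 for large T
    log_prob = T * math.log(per_step_prob)   # working in log space, as the scaled routines do, stays finite
    print(T, direct, log_prob)
```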
Usage of Scaled Version:
------------
```python
from myhmm_scaled import MyHmmScaled

model_file_name = r"./models/coins1.json"
hmm = MyHmmScaled(model_file_name)

# multiple observation sequences for training
observations = [("Heads", "Tails", "Heads", "Heads", "Heads", "Tails"), ("Tails", "Tails", "Tails", "Heads", "Heads", "Tails")]

# compute model parameters using the forward-backward Baum-Welch algorithm with scaling (refer to Rabiner)
hmm.forward_backward_multi_scaled(observations)
# hmm.A will contain the transition probabilities, hmm.B the emission probabilities, and hmm.pi the starting distribution

# get the log probability of a sequence of observations, log P(O|model), using the forward_scaled algorithm
log_prob_1 = hmm.forward_scaled(observations[0])
log_prob_2 = hmm.forward_scaled(observations[1])
```
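As noted in point 2 of the note above, forward_scaled returns log P(O|model); when P(O) is large enough to be representable, it can be recovered with math.exp, for example:

```python
import math

# recover P(O|model) from the log probability returned by forward_scaled;
# this only works when P(O) is not smaller than the minimum representable float
prob_1 = math.exp(log_prob_1)
```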