https://github.com/amber-abuah/ngram-text-generation
Text generation for autocomplete using N-Grams and Maximum Likelihood Estimators.
- Host: GitHub
- URL: https://github.com/amber-abuah/ngram-text-generation
- Owner: Amber-Abuah
- Created: 2024-11-10T21:06:21.000Z (11 months ago)
- Default Branch: main
- Last Pushed: 2024-11-10T23:13:12.000Z (11 months ago)
- Last Synced: 2025-01-10T20:43:01.471Z (9 months ago)
- Topics: mle, ngram-language-model, ngrams, nlp, nltk, streamlit
- Language: Python
- Homepage: https://ngram-text-generation-amber-abuah.streamlit.app/
- Size: 10.7 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
## Text Autocomplete Using N-Grams
A Streamlit app capable of auto-completing text from a given context.
The app uses three N-gram orders (bigram, trigram, and fourgram) to generate further text from a context provided by the user.
The following words are predicted by Maximum Likelihood Estimator (MLE) models, one trained on each N-gram order. Below is an example of autocompleting the sentence _'I was in awe when I noticed'_ with each model:
>**Bigram:** I was in awe when I noticed her friends in last...
>**Trigram:** I was in awe when I noticed by the window. Emma then looked up; but as long as...
>**Fourgram:** I was in awe when I noticed her father's house, he pleased them all.
Training data: [Emma](https://www.gutenberg.org/ebooks/19839) and [Persuasion](https://www.gutenberg.org/ebooks/105) by Jane Austen from the [Gutenberg Corpus](https://www.gutenberg.org/).
Libraries: `NLTK`, `Streamlit`.
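As a rough sketch of the approach described above, an MLE N-gram model can be trained and sampled with NLTK's `nltk.lm` module. The snippet below is an illustration, not the repository's actual code: it uses a tiny toy corpus in place of the Austen novels the app trains on.

```python
from nltk.lm import MLE
from nltk.lm.preprocessing import padded_everygram_pipeline

# Toy corpus standing in for the training data
# (the actual app trains on Emma and Persuasion from Project Gutenberg).
sentences = [
    "i was in awe when i noticed her friends".split(),
    "emma looked up at the window and smiled".split(),
    "i noticed her father by the window".split(),
]

n = 3  # trigram model; the app also builds bigram and fourgram models
train, vocab = padded_everygram_pipeline(n, sentences)
lm = MLE(n)
lm.fit(train, vocab)

# Autocomplete: sample the next 5 words following the given context.
completion = lm.generate(5, text_seed=["i", "noticed"], random_seed=7)
print(completion)
```

Because MLE assigns zero probability to unseen N-grams, generation can only follow word sequences that occur in the training text, which is why each model's completion above reads like recombined Austen phrases.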