https://github.com/alichtman/text-language-identifier
Identify written English, French, or Italian text with up to 99% accuracy.
- Host: GitHub
- URL: https://github.com/alichtman/text-language-identifier
- Owner: alichtman
- License: gpl-3.0
- Created: 2018-05-07T14:48:17.000Z (almost 7 years ago)
- Default Branch: master
- Last Pushed: 2018-06-30T00:32:02.000Z (almost 7 years ago)
- Last Synced: 2025-02-13T22:24:46.115Z (2 months ago)
- Topics: bigram-model, language-identification, language-model, linguistic-analysis, n-grams, text-classification-python, text-processing
- Language: Python
- Homepage:
- Size: 5.88 MB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# Text Language Identifier
`text-language-identifier` identifies written English, French, and Italian text with up to 99% accuracy.
Since this project was built to develop a better understanding of `n-gram analysis`, no natural language processing modules were imported; everything was implemented from first principles.

### Usage
0. Download this repo as a `.zip` and extract it
1. `cd src`
2. `$ python3 lang-bigram-id.py`

Accuracy information for each model is displayed in the terminal after the analysis completes.
Diff the output files to see which lines were predicted differently by certain pairs of models.
Here are some commands to try:
```shell
$ diff ../output/letter-bigram-laplace-smoothing-predictions.txt ../output/letter-bigram-no-smoothing-predictions.txt
$ diff ../output/letter-bigram-laplace-smoothing-predictions.txt ../output/word-bigram-no-smoothing-predictions.txt
$ diff ../output/letter-bigram-laplace-smoothing-predictions.txt ../output/word-bigram-laplace-smoothing-predictions.txt
```

### How does it work?
This program creates a probabilistic model of each language based on bigram analyses of French, English, and Italian sample corpora. To predict the language of a test sentence, it creates another probabilistic model representing the sentence and chooses the language whose model is most similar to the sentence model, measured by root-mean-square error (RMSE).
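A minimal sketch of this idea, using hypothetical toy corpora in place of the real sample corpora (function names here are illustrative, not the ones used in `lang-bigram-id.py`): build a letter-bigram frequency model per language, then pick the language whose model minimizes RMSE against the sentence's model.

```python
import math
from collections import Counter

def bigram_model(text):
    """Map each letter bigram in `text` to its relative frequency."""
    text = text.lower()
    counts = Counter(text[i:i + 2] for i in range(len(text) - 1))
    total = sum(counts.values())
    return {bg: c / total for bg, c in counts.items()}

def rmse(model_a, model_b):
    """Root-mean-square error over the union of both models' bigrams."""
    keys = set(model_a) | set(model_b)
    return math.sqrt(sum((model_a.get(k, 0.0) - model_b.get(k, 0.0)) ** 2
                         for k in keys) / len(keys))

def predict(sentence, language_models):
    """Choose the language whose model is closest (lowest RMSE) to the sentence."""
    sentence_model = bigram_model(sentence)
    return min(language_models,
               key=lambda lang: rmse(language_models[lang], sentence_model))

# Toy corpora stand in for the real training corpora.
models = {
    "english": bigram_model("the quick brown fox jumps over the lazy dog"),
    "french": bigram_model("le renard brun saute par dessus le chien paresseux"),
}
print(predict("the quick brown fox", models))
```

With real corpora the language models would be trained once and reused for every test sentence, which is what makes the per-sentence prediction cheap.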
### Why?
This was a computational linguistics experiment to see which language model (word bigrams or letter bigrams) performs better. I also tested the impact of Laplace smoothing on the models' predictive accuracy.
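To illustrate the smoothing being compared, here is a sketch of Laplace (add-one) smoothing for a letter-bigram model; the alphabet choice and function name are assumptions for the example, not taken from the project's code. Every possible bigram gets a pseudocount of 1, so bigrams unseen in training still receive nonzero probability:

```python
import string
from collections import Counter

def smoothed_bigram_model(text, alphabet=string.ascii_lowercase + " "):
    """Letter-bigram probabilities with Laplace (add-one) smoothing."""
    text = text.lower()
    counts = Counter(text[i:i + 2] for i in range(len(text) - 1))
    vocab = [a + b for a in alphabet for b in alphabet]  # every possible bigram
    total = sum(counts.values()) + len(vocab)            # +1 per bigram type
    return {bg: (counts.get(bg, 0) + 1) / total for bg in vocab}

model = smoothed_bigram_model("abab")
# "ab" was observed, "zz" was not, yet both have nonzero probability.
```

Without smoothing, a single bigram in the test sentence that never appeared in the training corpus would get probability zero, which is why smoothing can change which lines the models predict differently.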