The Kurdish Language Processing Toolkit
- Host: GitHub
- URL: https://github.com/sinaahmadi/klpt
- Owner: sinaahmadi
- License: other
- Created: 2020-11-07T04:22:26.000Z (about 4 years ago)
- Default Branch: master
- Last Pushed: 2024-09-18T14:53:14.000Z (3 months ago)
- Last Synced: 2024-11-08T15:50:49.074Z (about 1 month ago)
- Topics: kurdish, kurdish-language-processing, kurdish-oss, kurdish-stemming, kurdish-tokenization, language-technology, less-resource-languages, natural-language-processing, toolkit
- Language: Python
- Homepage: https://sinaahmadi.github.io/klpt/
- Size: 5.62 MB
- Stars: 93
- Watchers: 9
- Forks: 14
- Open Issues: 5
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
Awesome Lists containing this project
- awesome-kurdish - Kurdish Language Processing Toolkit
README
# Kurdish Language Processing Toolkit
### Welcome / *Hûn bi xêr hatin* / بە خێر بێن! 🙂
Kurdish Language Processing Toolkit (KLPT) is a [natural language processing](https://en.wikipedia.org/wiki/Natural_language_processing) (NLP) toolkit in Python for the [Kurdish language](https://en.wikipedia.org/wiki/Kurdish_languages). The current version comes with four core modules, namely `preprocess`, `stem`, `transliterate` and `tokenize`, and addresses basic language processing tasks such as text preprocessing, stemming, tokenization, spell-checking and morphological analysis for the [Sorani](https://en.wikipedia.org/wiki/Sorani) and [Kurmanji](https://en.wikipedia.org/wiki/Kurmanji) dialects of Kurdish.
---
#### Latest update on April 29th, 2022 🎉

In the latest version, I focused on Kurmanji, for which a morphological analyzer, a stemmer and a lemmatizer are now added to the toolkit. These tasks were partially addressed previously using the [Apertium project](https://github.com/apertium/apertium-kmr). Now, that is fully replaced by the Kurmanji implementation of [Kurdish Hunspell](https://github.com/sinaahmadi/KurdishHunspell).
---
## Install KLPT
- **Operating system**: macOS / OS X · Linux · Windows (Cygwin, MinGW, Visual Studio)
- **Python version**: Python 3.5+
- **Package managers**: [pip](https://pypi.org/project/klpt/)

### pip
Using pip, KLPT releases are available as source packages and binary wheels. Please make sure that a compatible Python version is installed:
```bash
pip install klpt
```

All the data files including lexicons and morphological rules are also installed with the package.
Although KLPT does not depend on any other NLP toolkit, there is one important requirement, particularly for the `stem` module: [`cyhunspell`](https://pypi.org/project/cyhunspell/), which should be installed with a version >= 2.0.1.
### About this version
Please note that KLPT is under development and some functionalities will appear in future versions. You can follow the progress of each task in the [Projects](https://github.com/sinaahmadi/KLPT/projects) section. The current version includes the following tasks:
| Modules | Tasks | Sorani (ckb) | Kurmanji (kmr) |
|---------|-------|--------------|----------------|
| `preprocess` | normalization | ✓ (v0.1.0) | ✓ (v0.1.0) |
| | standardization | ✓ (v0.1.0) | ✓ (v0.1.0) |
| | unification of numerals | ✓ (v0.1.0) | ✓ (v0.1.0) |
| | stopwords | ✓ (v0.1.4) | ✓ (v0.1.4) |
| `tokenize` | word tokenization | ✓ (v0.1.0) | ✓ (v0.1.0) |
| | MWE tokenization | ✓ (v0.1.0) | ✓ (v0.1.0) |
| | sentence tokenization | ✓ (v0.1.0) | ✓ (v0.1.0) |
| | part-of-speech tagging | ✗ | ✗ |
| `transliterate` | Arabic to Latin | ✓ (v0.1.0) | ✓ (v0.1.0) |
| | Latin to Arabic | ✓ (v0.1.0) | ✓ (v0.1.0) |
| | detection of u/w and î/y | ✓ (v0.1.0) | ✓ (v0.1.0) |
| | detection of Bizroke (i) | ✗ | ✗ |
| `stem` | morphological analysis | ✓ (v0.1.0) | ✓ (v0.1.1) |
| | stemming | ✓ (v0.1.5) | ✓ (v0.1.6) 🆕 |
| | lemmatization | ✓ (v0.1.5) | ✓ (v0.1.6) 🆕 |
| | spell error detection and correction | ✓ (v0.1.0) | ✓ (v0.1.6) 🆕 |
## Basic usage
Once the package is installed, you can import the toolkit as follows:
```python
import klpt
```

In the following, a few examples are provided to work with various modules. Almost all the classes have three arguments in common:
- `dialect`: the name of the dialect as `Sorani` or `Kurmanji` (ISO 639-3 code will be also added)
- `script`: the script of your input text as "Arabic" or "Latin"
- `numeral`: the type of the numerals as
  - Arabic [١٢٣٤٥٦٧٨٩٠]
  - Farsi [۱۲۳۴۵۶۷۸۹۰]
  - Latin [1234567890]

### Preprocess
This module deals with normalizing scripts and orthographies by using writing conventions based on dialects and scripts. The goal is not to correct the orthography but to normalize the text in terms of the encoding and common writing rules. The input encoding should be in UTF-8 only. To this end, the following functions are provided:
- `normalize`: deals with different encodings and unifies characters based on dialects and scripts
- `standardize`: given a normalized text, it returns standardized text based on the Kurdish orthographies following recommendations for [Kurmanji](https://books.google.ie/books?id=Z7lDnwEACAAJ) and [Sorani](http://yageyziman.com/Renusi_Kurdi.htm)
- `unify_numerals`: conversion of the various types of numerals used in Kurdish texts
- `preprocess`: one single function for normalization, standardization and unification of numerals

It is recommended that the output of this module be used as the input of subsequent tasks in an NLP pipeline.
```python
>>> from klpt.preprocess import Preprocess

>>> preprocessor_ckb = Preprocess("Sorani", "Arabic", numeral="Latin")
>>> preprocessor_ckb.normalize("لە ســـاڵەکانی ١٩٥٠دا")
'لە ساڵەکانی 1950دا'
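>>> # Descriptive note: in this example, normalize removed the kashida (ـ)
>>> # stretching characters and, since numeral="Latin" was set, converted the
>>> # Arabic-script digits ١٩٥٠ to 1950.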
>>> preprocessor_ckb.standardize("راستە لەو ووڵاتەدا")
'ڕاستە لەو وڵاتەدا'
>>> preprocessor_ckb.unify_numerals("٢٠٢٠")
'2020'
>>> preprocessor_ckb.preprocess("راستە لە ووڵاتەی ٢٣هەمدا")
'ڕاستە لە وڵاتەی 23هەمدا'

>>> preprocessor_kmr = Preprocess("Kurmanji", "Latin")
>>> preprocessor_kmr.standardize("di sala 2018-an")
'di sala 2018an'
>>> preprocessor_kmr.standardize("hêviya")
'hêvîya'
```

In addition, it is possible to remove Kurdish [stopwords](https://en.wikipedia.org/wiki/Stop_word) using the `stopwords` variable. You can define a function like the following to do so:
```python
from klpt.preprocess import Preprocess

def remove_stopwords(text, dialect, script):
p = Preprocess(dialect, script)
return [token for token in text.split() if token not in p.stopwords]
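
# Example (sketch): remove_stopwords(text, "Sorani", "Arabic") returns the
# whitespace-separated tokens of `text` with the Sorani stopwords filtered out.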
```

### Tokenization
This module focuses on the tokenization of both Kurmanji and Sorani dialects of Kurdish with the following functions:
- `word_tokenize`: tokenization of texts into tokens (both [multi-word expressions](https://aclweb.org/aclwiki/Multiword_Expressions) and single-word tokens).
- `mwe_tokenize`: tokenization of texts by only taking compound forms into account
- `sent_tokenize`: tokenization of texts into sentences

This module is based on the [Kurdish tokenization project](https://github.com/sinaahmadi/KurdishTokenization). It is recommended that the output of this module be used as the input of the `Stem` module.
```python
>>> from klpt.tokenize import Tokenize

>>> tokenizer = Tokenize("Kurmanji", "Latin")
>>> tokenizer.word_tokenize("ji bo fortê xwe avêtin")
['▁ji▁', 'bo', '▁▁fortê‒xwe‒avêtin▁▁']
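>>> # Descriptive note: in the output above, the ▁ and ▁▁ markers appear to
>>> # delimit single tokens and multi-word expressions respectively, with ‒
>>> # joining the parts of an MWE (see the project documentation for details).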
>>> tokenizer.mwe_tokenize("bi serokê hukûmeta herêma Kurdistanê Prof. Salih re saz kir.")
'bi serokê hukûmeta herêma Kurdistanê Prof . Salih re saz kir .'

>>> tokenizer_ckb = Tokenize("Sorani", "Arabic")
>>> tokenizer_ckb.word_tokenize("بە هەموو هەمووانەوە ڕێک کەوتن")
['▁بە▁', '▁هەموو▁', 'هەمووانەوە', '▁▁ڕێک‒کەوتن▁▁']
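>>> # The module also provides sent_tokenize for splitting running text into
>>> # sentences; its output is not shown here.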
```

### Transliteration
This module aims at transliterating one script of Kurdish into another. Currently, only the Latin-based and the Arabic-based scripts of Sorani and Kurmanji are supported. The main function in this module is `transliterate()`, which also takes care of detecting the correct form of double-usage graphemes, namely و ↔ w/u and ی ↔ î/y. In specific cases, it can also predict the placement of the missing *i* (also known as *Bizroke/بزرۆکە*).
The module is based on the [Kurdish transliteration project](https://github.com/sinaahmadi/wergor).
```python
>>> from klpt.transliterate import Transliterate
>>> transliterate = Transliterate("Kurmanji", "Latin", target_script="Arabic")
>>> transliterate.transliterate("rojhilata navîn")
'رۆژهلاتا ناڤین'

>>> transliterate_ckb = Transliterate("Sorani", "Arabic", target_script="Latin")
>>> transliterate_ckb.transliterate("لە وڵاتەکانی دیکەدا")
'le wiłatekanî dîkeda'
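>>> # Descriptive note: in the Latin-script output above, ڵ is rendered as ł
>>> # and ی as î, as can be seen in "wiłatekanî".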
```

### Stem
The Stem module deals with various tasks, mainly through the following functions:
- `check_spelling`: spell error detection
- `correct_spelling`: spell error correction
- `analyze`: morphological analysis
- `stem`: stemming, e.g. "بڕاوە" → "بڕ" or "dixwî" → "xw"
- `lemmatize`: lemmatization, e.g. "بردمنەوە" → "بردن" or "jimartibûye" → "hejmartin"

The module is based on the [Kurdish Hunspell project](https://github.com/sinaahmadi/KurdishHunspell) for Sorani and the [Apertium project](https://github.com/apertium/apertium-kmr) for Kurmanji. Please note that this module is still being completed; we are aware of its current shortcomings.
```python
>>> from klpt.stem import Stem
>>> stemmer = Stem("Sorani", "Arabic")
>>> stemmer.check_spelling("سوتاندبووت")
False
>>> stemmer.correct_spelling("سوتاندبووت")
(False, ['ستاندبووت', 'سووتاندبووت', 'سووڕاندبووت', 'ڕووتاندبووت', 'فەوتاندبووت', 'بووژاندبووت'])
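>>> # Descriptive note: correct_spelling returns a pair -- a boolean indicating
>>> # whether the word is spelled correctly, and a list of suggested corrections.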
>>> stemmer.analyze("دیتبامن")
[{'pos': ['verb'], 'description': 'past_stem_transitive_active', 'stem': 'دی', 'lemma': ['دیتن'], 'base': 'دیت', 'prefixes': '', 'suffixes': 'بامن'}]
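>>> # Descriptive note: analyze returns a list of analyses; each one is a
>>> # dictionary with the part of speech, a morphological description, and the
>>> # stem, lemma, base, prefixes and suffixes of the word form.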
>>> stemmer.stem("دەچینەوە")
['چ']
>>> stemmer.stem("گورەکە", mark_unknown=True) # گوڵەکە in Hewlêrî dialect
['_گور_']
>>> stemmer.lemmatize("گوڵەکانم")
['گوڵ', 'گوڵە']

>>> stemmer = Stem("Kurmanji", "Latin")
>>> stemmer.analyze("dibêjim")
[{'base': 'bêj', 'prefixes': 'di', 'suffixes': 'im', 'pos': ['verb'], 'description': 'present_stem_transitive_active', 'stem': 'bêj', 'lemma': ['gotin']}]
>>> stemmer.stem("dixwî")
['xw']
>>> stemmer.lemmatize("jimartibûye")
['hejmartin']
```

📖 **Please note that a more complete documentation of the toolkit is available at [https://sinaahmadi.github.io/klpt/](https://sinaahmadi.github.io/klpt/)**.
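
Putting the modules together: the recommendations above suggest feeding the output of `preprocess` into `tokenize` and the output of `tokenize` into the `Stem` module. The following is a minimal sketch of such a pipeline based only on the examples in this README; the marker-stripping step is an assumption about how the tokenizer's boundary markers are meant to be handled, not documented behavior:

```python
from klpt.preprocess import Preprocess
from klpt.tokenize import Tokenize
from klpt.stem import Stem

# Set up the three modules for Sorani written in the Arabic script.
preprocessor = Preprocess("Sorani", "Arabic", numeral="Latin")
tokenizer = Tokenize("Sorani", "Arabic")
stemmer = Stem("Sorani", "Arabic")

# Sentence reused from the tokenization example above.
text = "بە هەموو هەمووانەوە ڕێک کەوتن"

# 1) preprocess, 2) tokenize, 3) lemmatize each token.
clean_text = preprocessor.preprocess(text)
for token in tokenizer.word_tokenize(clean_text):
    # Stripping the ▁/▁▁ boundary markers before lemmatization is an assumption;
    # multi-word expressions keep their internal ‒ joiner and may not be lemmatized.
    bare_token = token.strip("▁‒ ")
    print(bare_token, stemmer.lemmatize(bare_token))
```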
## Become a sponsor
Please consider donating to the project. Data annotation and resource creation require a tremendous amount of time and linguistic expertise; even a small donation makes a difference. You can do so by [becoming a sponsor](https://github.com/sponsors/sinaahmadi) to accompany me on this journey and help the Kurdish language have a better place among other natural languages on the Web. Depending on your support,
- You can be an official sponsor
- You will get a GitHub sponsor badge on your profile
- If you have any questions, I will prioritize answering them
- If you want, I will add your name or company logo to the front page of your preferred project
- Your contribution will be acknowledged in one of my future papers in a field of your choice

*And thanks to those who have already sponsored this project. More significant achievements will be made thanks to you!*
## Contribute
Are you interested in this project? Each task is addressed in its own repository. Please check the following repositories to find the one you are most interested in:
- [Kurdish tokenization](https://github.com/sinaahmadi/KurdishTokenization)
- [Kurdish Hunspell](https://github.com/sinaahmadi/KurdishHunspell)
- [Kurdish transliteration](https://github.com/sinaahmadi/wergor)

In addition, our main objective is to extend the current toolkit to include more tasks, particularly part-of-speech tagging, named-entity recognition and syntactic analysis. Further instructions are provided at [https://sinaahmadi.github.io/klpt/about/contributing/](https://sinaahmadi.github.io/klpt/about/contributing/). You can also join us on [Gitter](https://gitter.im/KurdishNLP/community).
Don't forget, **open-source is fun!** 😊
## Requirements
- Python >=3.6
- [`cyhunspell`](https://pypi.org/project/cyhunspell/) >= 2.0.1

## Cite this project
Please consider citing [this paper](https://sinaahmadi.github.io/docs/articles/ahmadi2020klpt.pdf), if you use any part of the data or the tool ([`bib` file](https://sinaahmadi.github.io/bibliography/ahmadi2020klpt.txt)):

    @inproceedings{ahmadi2020klpt,
        title = "{KLPT--Kurdish Language Processing Toolkit}",
        author = "Ahmadi, Sina",
        booktitle = "Proceedings of the second Workshop for {NLP} Open Source Software ({NLP}-{OSS})",
        month = nov,
        year = "2020",
        publisher = "Association for Computational Linguistics"
    }

## License
Kurdish Language Processing Toolkit by Sina Ahmadi is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, which means:

- **You are free to share**, copy and redistribute the material in any medium or format and also adapt, remix, transform, and build upon the material
for any purpose, **even commercially**.
- **You must give appropriate credit**, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- If you remix, transform, or build upon the material, **you must distribute your contributions under the same license as the original**.