https://github.com/worldbank/wb-nlp-tools
Natural language processing tools developed by the World Bank's DECAT unit. A suite of text preprocessing and cleaning algorithms for NLP analysis and modeling.
- Host: GitHub
- URL: https://github.com/worldbank/wb-nlp-tools
- Owner: worldbank
- License: mit
- Created: 2021-03-01T13:07:00.000Z (over 4 years ago)
- Default Branch: main
- Last Pushed: 2022-06-11T05:06:40.000Z (about 3 years ago)
- Last Synced: 2025-04-03T00:51:47.412Z (2 months ago)
- Topics: gensim, langdetect, nlp, nltk, pdf2text, python, spacy, text-mining
- Language: Python
- Homepage:
- Size: 2.73 MB
- Stars: 10
- Watchers: 9
- Forks: 7
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# WB Cleaning Module
This module contains the implementation of a suite of text preprocessing and cleaning pipelines. The cleaning architecture is designed to be flexible and can be configured through config files, e.g., [`configs/cleaning/default.yml`](configs/cleaning/default.yml).
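As a rough illustration of this config-driven design, the snippet below loads a YAML cleaning configuration and walks over its entries. The `load_config` helper is a hypothetical stand-in for demonstration, not the module's actual API.

```python
# Illustrative sketch only: `load_config` is a hypothetical helper,
# not part of the wb-nlp-tools API.
import yaml

def load_config(path: str) -> dict:
    """Read a cleaning configuration such as configs/cleaning/default.yml."""
    with open(path, "r", encoding="utf-8") as f:
        return yaml.safe_load(f)

config = load_config("configs/cleaning/default.yml")

# Each top-level entry would toggle or parameterize a cleaning step.
for step, params in config.items():
    print(step, params)
```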
# Modules
### Document preprocessing and cleaning
Most of the raw data we use come in the form of PDF and text documents. We developed a suite of preprocessing and cleaning modules to handle the transformations required to generate high-quality input to our models.
An overview of the pipeline is as follows (a rough sketch of these steps appears after the list):
- Convert PDF to text.
- Parse the text document and perform sentence tokenization.
- Lemmatize the tokens and remove stop words.
- Drop all non-alphabetical tokens.
- Apply spell check and try to recover misspelled words.
- Normalize tokens by converting to lowercase.
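The sketch below illustrates these steps (excluding PDF conversion and spell checking) with spaCy. It assumes the `en_core_web_sm` model is installed and is only an illustration, not the repository's actual implementation.

```python
# Rough sketch of the cleaning steps using spaCy; the repository's pipeline
# is config-driven and more involved than this illustration.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def clean_text(text: str) -> list[str]:
    doc = nlp(text)
    tokens = []
    for sent in doc.sents:                       # sentence tokenization
        for tok in sent:
            if tok.is_stop or not tok.is_alpha:  # drop stop words and non-alphabetical tokens
                continue
            tokens.append(tok.lemma_.lower())    # lemmatize and normalize to lowercase
    return tokens

print(clean_text("The projects were financed through several development banks."))
```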
### Phrase detection
The inference of phrases in the documents is also part of the preprocessing. Phrases are logical groupings of tokens that represent an intrinsic meaning.
We primarily leverage the [Gensim](https://radimrehurek.com/gensim/) NLP toolkit and spaCy to develop the phrase detection algorithms.
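The snippet below shows what phrase detection with Gensim's `Phrases` model might look like on pre-tokenized sentences. The toy corpus and the `min_count`/`threshold` values are illustrative, not the project's actual settings.

```python
# Minimal phrase-detection example with Gensim's Phrases model;
# min_count and threshold are illustrative, not the project's settings.
from gensim.models import Phrases

sentences = [
    ["world", "bank", "supports", "poverty", "reduction"],
    ["the", "world", "bank", "finances", "development", "projects"],
    ["poverty", "reduction", "remains", "a", "core", "goal"],
]

# Learn frequently co-occurring token pairs, e.g., "world_bank", "poverty_reduction".
bigram = Phrases(sentences, min_count=1, threshold=1)

for sent in sentences:
    print(bigram[sent])
```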
### Acronym detection
Acronyms are fairly common in documents from development organizations and multilateral development banks. In this project, we include in our pipeline an acronym detector and expander. The idea is to detect acronyms in a document and replace all of the acronyms with the appropriate expansion.
We also keep track of acronyms with multiple expansions and generate a prototype for each that encodes the information of the acronym, e.g., PPP -> public-private partnership or purchasing power parity.
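A simplified, regex-based illustration of this idea is sketched below; the pattern, helper names, and example text are assumptions for demonstration, and the pipeline's actual detector and prototype generation are more sophisticated.

```python
# Simplified illustration of acronym detection and expansion; the project's
# detector is more sophisticated than this regex sketch.
import re

# Match patterns like "purchasing power parity (PPP)".
ACRONYM_DEF = re.compile(r"\b([A-Za-z][\w-]*(?:\s+[\w-]+){0,6})\s*\(([A-Z]{2,})\)")

def build_acronym_map(text: str) -> dict:
    """Collect acronym -> expansion pairs defined in the text."""
    mapping = {}
    for long_form, acronym in ACRONYM_DEF.findall(text):
        # Keep only as many trailing words as the acronym has letters.
        words = long_form.split()[-len(acronym):]
        mapping[acronym] = " ".join(words)
    return mapping

def expand_acronyms(text: str, mapping: dict) -> str:
    """Replace standalone acronym mentions with their expansions."""
    for acronym, expansion in mapping.items():
        text = re.sub(rf"\b{re.escape(acronym)}\b(?!\))", expansion, text)
    return text

doc = "Economists use purchasing power parity (PPP) to compare prices. PPP adjusts for price levels."
mapping = build_acronym_map(doc)
print(expand_acronyms(doc, mapping))
```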