
# `clean-text` [![Build Status](https://img.shields.io/github/actions/workflow/status/jfilter/clean-text/test.yml)](https://github.com/jfilter/clean-text/actions/workflows/test.yml) [![PyPI](https://img.shields.io/pypi/v/clean-text.svg)](https://pypi.org/project/clean-text/) [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/clean-text.svg)](https://pypi.org/project/clean-text/) [![PyPI - Downloads](https://img.shields.io/pypi/dm/clean-text)](https://pypistats.org/packages/clean-text)

User-generated content on the Web and in social media is often dirty. Preprocess your scraped data with `clean-text` to create a normalized text representation. For instance, turn this corrupted input:

```txt
A bunch of \\u2018new\\u2019 references, including [Moana](https://en.wikipedia.org/wiki/Moana_%282016_film%29).

»Yóù àré rïght <3!«
```

into this clean output:

```txt
A bunch of 'new' references, including [moana]().

"you are right <3!"
```

`clean-text` uses [ftfy](https://github.com/LuminosoInsight/python-ftfy), [unidecode](https://github.com/takluyver/Unidecode) and numerous hand-crafted rules, i.e., regular expressions.
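As an illustration of such a hand-crafted rule, URL replacement can be done with a single regular expression. This is a simplified stand-in, not the package's actual pattern:

```python
import re

# Simplified URL pattern; the package's real rules are more thorough.
URL_RE = re.compile(r"https?://\S+")

def replace_urls(text: str, token: str = "<URL>") -> str:
    # Substitute every URL match with a single placeholder token.
    return URL_RE.sub(token, text)

print(replace_urls("see https://example.com for details"))
# -> see <URL> for details
```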

## Installation

To install `clean-text` together with the GPL-licensed package [unidecode](https://github.com/takluyver/Unidecode):

```bash
pip install clean-text[gpl]
```

If you want to avoid the GPL dependency, install without it:

```bash
pip install clean-text
```

NB: This package is named `clean-text` and not `cleantext`.

If [unidecode](https://github.com/takluyver/Unidecode) is not available, `clean-text` falls back to Python's [unicodedata.normalize](https://docs.python.org/3/library/unicodedata.html#unicodedata.normalize) for [transliteration](https://en.wikipedia.org/wiki/Transliteration).
Transliteration to the closest ASCII symbols relies on manual mappings, e.g., `ê` to `e`.
`unidecode`'s mappings are superior, but `unicodedata`'s are sufficient.
However, you may want to disable this feature altogether, depending on your data and use case.

To be clear: there are **inconsistencies** between processing text with and without `unidecode`.
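The fallback can be approximated with a few lines of standard-library code. This is a simplified sketch, not the package's exact implementation:

```python
import unicodedata

def ascii_fallback(text: str) -> str:
    # NFKD decomposes accented characters ('ê' becomes 'e' plus a
    # combining accent); encoding with errors="ignore" then drops
    # everything that has no ASCII form.
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")

print(ascii_fallback("Yóù àré rïght"))  # -> You are right
```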

## Usage

```python
from cleantext import clean

clean("some input",
    fix_unicode=True,               # fix various unicode errors
    to_ascii=True,                  # transliterate to closest ASCII representation
    lower=True,                     # lowercase text
    no_line_breaks=False,           # fully strip line breaks as opposed to only normalizing them
    no_code=False,                  # replace all code snippets with a special token
    no_urls=False,                  # replace all URLs with a special token
    no_emails=False,                # replace all email addresses with a special token
    no_phone_numbers=False,         # replace all phone numbers with a special token
    no_ip_addresses=False,          # replace all IP addresses with a special token
    no_file_paths=False,            # replace all file paths with a special token
    no_numbers=False,               # replace all numbers with a special token
    no_digits=False,                # replace all digits with a special token
    no_currency_symbols=False,      # replace all currency symbols with a special token
    no_punct=False,                 # remove punctuation
    replace_with_punct="",          # instead of removing punctuation you may replace it
    exceptions=None,                # list of regex patterns to preserve verbatim
    replace_with_code="",
    replace_with_url="",
    replace_with_email="",
    replace_with_phone_number="",
    replace_with_ip_address="",
    replace_with_file_path="",
    replace_with_number="",
    replace_with_digit="0",
    replace_with_currency_symbol="",
    lang="en"                       # set to 'de' for German special handling
)
```

Carefully choose the arguments that fit your task. The default parameters are listed above.

### Preserving patterns with exceptions

Use `exceptions` to protect specific text patterns from being modified during cleaning.
Each entry is a regex pattern string; all matches are preserved **verbatim** (not lowered, not
transliterated — exactly as they appeared in the input).

```python
from cleantext import clean

# Preserve a literal compound word while removing other punctuation
clean("drive-thru and text---cleaning", no_punct=True, exceptions=["drive-thru"])
# => 'drive-thru and textcleaning'

# Preserve all hyphenated compound words using a regex
clean("drive-thru and pick-up", no_punct=True, exceptions=[r"\w+-\w+"])
# => 'drive-thru and pick-up'

# Multiple exception patterns
clean("drive-thru costs $5", no_punct=True, no_currency_symbols=True,
exceptions=[r"\w+-\w+", r"\$\d+"])
# => 'drive-thru costs $5'
```
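One common way to implement this kind of preservation is placeholder substitution: stash each exception match behind a placeholder, clean the text, then restore the originals. A minimal, hypothetical sketch — the package's internals may differ:

```python
import re

def clean_with_exceptions(text, patterns, cleaner):
    # Replace each exception match with a placeholder the cleaner
    # will not touch, then restore the originals verbatim afterwards.
    saved = []

    def stash(match):
        saved.append(match.group(0))
        return f"\x00{len(saved) - 1}\x00"

    for pattern in patterns:
        text = re.sub(pattern, stash, text)
    text = cleaner(text)
    for index, original in enumerate(saved):
        text = text.replace(f"\x00{index}\x00", original)
    return text

print(clean_with_exceptions("Drive-Thru and TEXT", [r"\w+-\w+"], str.lower))
# -> Drive-Thru and text
```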

You may also use only specific cleaning functions. For this, take a look at the [source code](https://github.com/jfilter/clean-text/blob/main/cleantext/clean.py).

### Cleaning multiple texts in parallel

Use `clean_texts()` to clean a list of strings. Set `n_jobs` to enable parallel processing via Python's built-in `multiprocessing`:

```python
from cleantext import clean_texts

# Sequential (default) — no multiprocessing overhead
clean_texts(["text one", "text two", "text three"])

# Use all available CPU cores
clean_texts(["text one", "text two", "text three"], n_jobs=-1)

# Use a specific number of workers
clean_texts(["text one", "text two", "text three"], n_jobs=4)

# All clean() keyword arguments are supported
clean_texts(texts, n_jobs=-1, no_urls=True, lang="de", lower=False)
```

`n_jobs` semantics:
- `1` or `None` — sequential processing (default, zero overhead)
- `-1` — use all available CPU cores
- `-2` — use all cores except one, etc.
- Any positive integer — use exactly that many workers
- `0` — raises `ValueError`
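The semantics described above can be resolved into a concrete worker count like this (a hypothetical helper, not the package's actual code):

```python
import os

def resolve_n_jobs(n_jobs):
    # Map the n_jobs convention described above onto a worker count.
    cpu = os.cpu_count() or 1
    if n_jobs is None or n_jobs == 1:
        return 1  # sequential, no multiprocessing overhead
    if n_jobs == 0:
        raise ValueError("n_jobs must not be 0")
    if n_jobs < 0:
        # -1 -> all cores, -2 -> all cores but one, and so on
        return max(1, cpu + 1 + n_jobs)
    return n_jobs
```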

### Supported languages

So far, only English and German are fully supported, but the package should work for the majority of Western languages.
If you need special handling for your language, feel free to contribute. 🙃

### Using `clean-text` with `scikit-learn`

There is also a **scikit-learn**-compatible API for use in your pipelines.
All of the parameters above work here as well.

```bash
pip install clean-text[gpl,sklearn]  # with the GPL-licensed unidecode
pip install clean-text[sklearn]      # without unidecode
```

```python
from cleantext.sklearn import CleanTransformer

cleaner = CleanTransformer(no_punct=False, lower=False)

cleaner.transform(['Happily clean your text!', 'Another Input'])
```
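The transformer follows the standard scikit-learn `fit`/`transform` protocol. The idea can be sketched without any dependencies, using a trivial stand-in for `clean()`:

```python
class SimpleCleanTransformer:
    """Minimal scikit-learn style transformer wrapping a cleaning function."""

    def __init__(self, cleaner=str.lower):
        # `cleaner` stands in for cleantext.clean with fixed keyword arguments.
        self.cleaner = cleaner

    def fit(self, X, y=None):
        return self  # stateless: nothing to learn from the data

    def transform(self, X):
        return [self.cleaner(text) for text in X]

cleaner = SimpleCleanTransformer()
print(cleaner.transform(["Happily clean your text!", "Another Input"]))
# -> ['happily clean your text!', 'another input']
```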

## Development

[Use poetry.](https://python-poetry.org/)

See [RELEASING.md](RELEASING.md) for how to publish a new version.

## Contributing

If you have a **question**, found a **bug** or want to propose a new **feature**, have a look at the [issues page](https://github.com/jfilter/clean-text/issues).

**Pull requests** are especially welcome when they fix bugs or improve the code quality.

If you don't like the output of `clean-text`, consider adding a [test](https://github.com/jfilter/clean-text/tree/main/tests) with your specific input and desired output.

## Related Work

### Generic text cleaning packages

- https://github.com/pudo/normality
- https://github.com/davidmogar/cucco
- https://github.com/lyeoni/prenlp
- https://github.com/s/preprocessor
- https://github.com/artefactory/NLPretext
- https://github.com/cbaziotis/ekphrasis

### Full-blown NLP libraries with some text cleaning

- https://github.com/chartbeat-labs/textacy
- https://github.com/jbesomi/texthero

### Remove or replace strings

- https://github.com/vi3k6i5/flashtext
- https://github.com/ddelange/retrie

### Detect dates

- https://github.com/scrapinghub/dateparser

### Clean massive Common Crawl data

- https://github.com/facebookresearch/cc_net

## Acknowledgements

Built upon the work by [Burton DeWilde](https://github.com/bdewilde) for [Textacy](https://github.com/chartbeat-labs/textacy).

## License

Apache