https://github.com/huggingface/tokenizers
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
- Host: GitHub
- URL: https://github.com/huggingface/tokenizers
- Owner: huggingface
- License: apache-2.0
- Created: 2019-11-01T17:52:20.000Z (over 5 years ago)
- Default Branch: main
- Last Pushed: 2025-03-18T16:33:44.000Z (about 1 month ago)
- Last Synced: 2025-03-25T07:38:46.614Z (24 days ago)
- Topics: bert, gpt, language-model, natural-language-processing, natural-language-understanding, nlp, transformers
- Language: Rust
- Homepage: https://huggingface.co/docs/tokenizers
- Size: 10.2 MB
- Stars: 9,520
- Watchers: 121
- Forks: 867
- Open Issues: 80
Metadata Files:
- Readme: README.md
- License: LICENSE
- Citation: CITATION.cff
Awesome Lists containing this project
- awesome-python-machine-learning-resources - GitHub (30% open · ⏱️ 25.08.2022) (Text Data and NLP)
- awesome-rust - huggingface/tokenizers - Hugging Face's tokenizers for modern NLP pipelines (original implementation) with bindings for Python. [](https://github.com/huggingface/tokenizers/actions) (Libraries / Artificial Intelligence)
- awesome-rust-cn - huggingface/tokenizers - Hugging Face's tokenizers for modern NLP pipelines, written in Rust (Libraries / Artificial Intelligence)
- awesome-huggingface - tokenizers - Fast state-of-the-Art tokenizers optimized for research and production. (🤗 Official Libraries)
- awesome-list - HuggingFace Tokenizers - A high-performance library for text vocabularies and tokenizers. (Natural Language Processing / General Purpose NLP)
- awesome-tokenizers - huggingface/tokenizers
- awesome-ARTificial - tokenizers
- StarryDivineSky - huggingface/tokenizers
- bert-in-production - huggingface/tokenizers - Fast State-of-the-Art Tokenizers optimized for Research and Production (Implementations)
- project-awesome - huggingface/tokenizers - 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production (Rust)
- awesome-yolo-object-detection - Tokenizers - Fast State-of-the-Art Tokenizers optimized for Research and Production. [huggingface.co/docs/tokenizers](https://huggingface.co/docs/tokenizers/index) (Other Versions of YOLO)
- awesome-llm-and-aigc - Tokenizers - Fast State-of-the-Art Tokenizers optimized for Research and Production. [huggingface.co/docs/tokenizers](https://huggingface.co/docs/tokenizers/index) (Summary)
- awesome-llmops - tokenizers - Fast State-of-the-Art Tokenizers optimized for Research and Production (Serving / Large Model Serving)
- awesome-cuda-and-hpc - Tokenizers - Fast State-of-the-Art Tokenizers optimized for Research and Production. [huggingface.co/docs/tokenizers](https://huggingface.co/docs/tokenizers/index) (Frameworks)
- awesome-rust-list - Tokenizers - Fast State-of-the-Art Tokenizers optimized for Research and Production. [huggingface.co/docs/tokenizers](https://huggingface.co/docs/tokenizers/index) (Machine Learning)
- fucking-awesome-rust - huggingface/tokenizers - Hugging Face's tokenizers for modern NLP pipelines (original implementation) with bindings for Python. [](https://github.com/huggingface/tokenizers/actions) (Libraries / Artificial Intelligence)
README
Provides an implementation of today's most used tokenizers, with a focus on performance and
versatility.

## Main features:
- Train new vocabularies and tokenize, using today's most used tokenizers.
- Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes
less than 20 seconds to tokenize a GB of text on a server's CPU.
- Easy to use, but also extremely versatile.
- Designed for research and production.
- Normalization comes with alignments tracking. It's always possible to get the part of the
original sentence that corresponds to a given token.
- Does all the pre-processing: Truncate, Pad, add the special tokens your model needs (see the sketch below).
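The alignment tracking and pre-processing features above are exposed directly on the `Tokenizer` object. Here is a minimal sketch, assuming a pretrained `bert-base-uncased` tokenizer is fetched from the Hugging Face Hub; the padding and truncation settings are illustrative, not prescriptive:

```python
from tokenizers import Tokenizer

# Load a pretrained tokenizer from the Hub (assumes network access).
tokenizer = Tokenizer.from_pretrained("bert-base-uncased")

# Pre-processing: truncate long inputs and pad short ones to a fixed length.
tokenizer.enable_truncation(max_length=16)
tokenizer.enable_padding(pad_id=0, pad_token="[PAD]", length=16)

output = tokenizer.encode("Hello, y'all! How are you?")
print(output.tokens)   # tokens, including padding
print(output.offsets)  # (start, end) character spans mapping each token
                       # back to the original sentence (alignment tracking)
```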
## Performance

Performance can vary depending on hardware, but running the benchmark in [bindings/python/benches/test_tiktoken.py](bindings/python/benches/test_tiktoken.py) on a g6 AWS instance gives a representative throughput comparison.
## Bindings
We provide bindings to the following languages (more to come!):
- [Rust](https://github.com/huggingface/tokenizers/tree/main/tokenizers) (Original implementation)
- [Python](https://github.com/huggingface/tokenizers/tree/main/bindings/python)
- [Node.js](https://github.com/huggingface/tokenizers/tree/main/bindings/node)
- [Ruby](https://github.com/ankane/tokenizers-ruby) (Contributed by @ankane, external repo)

## Installation
You can install from source using:
```bash
pip install git+https://github.com/huggingface/tokenizers.git#subdirectory=bindings/python
```

or install the released versions with:
```bash
pip install tokenizers
```
## Quick example using Python

Choose your model between Byte-Pair Encoding, WordPiece or Unigram and instantiate a tokenizer:
```python
from tokenizers import Tokenizer
from tokenizers.models import BPE

tokenizer = Tokenizer(BPE())
```

You can customize how pre-tokenization (e.g., splitting into words) is done:
```python
from tokenizers.pre_tokenizers import Whitespace

tokenizer.pre_tokenizer = Whitespace()
```

Then training your tokenizer on a set of files just takes two lines of code:
```python
from tokenizers.trainers import BpeTrainer

trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["wiki.train.raw", "wiki.valid.raw", "wiki.test.raw"], trainer=trainer)
```

Once your tokenizer is trained, encode any text with just one line:
```python
output = tokenizer.encode("Hello, y'all! How are you 😁 ?")
print(output.tokens)
# ["Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?"]
```
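For production use, the trained tokenizer can be serialized to a single JSON file and reloaded later. A minimal sketch, continuing the example above (the file name `tokenizer.json` is just an example):

```python
from tokenizers import Tokenizer

# Persist everything (model, normalizer, pre-tokenizer, ...) to one file.
tokenizer.save("tokenizer.json")

# Reload it later, e.g. at serving time.
tokenizer = Tokenizer.from_file("tokenizer.json")

output = tokenizer.encode("Hello, y'all! How are you 😁 ?")
print(output.ids)  # token ids, ready to feed to a model
```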
Check the [documentation](https://huggingface.co/docs/tokenizers/index) or the [quicktour](https://huggingface.co/docs/tokenizers/quicktour) to learn more!