Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/huggingface/tokenizers
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
- Host: GitHub
- URL: https://github.com/huggingface/tokenizers
- Owner: huggingface
- License: apache-2.0
- Created: 2019-11-01T17:52:20.000Z (about 5 years ago)
- Default Branch: main
- Last Pushed: 2024-10-25T13:44:30.000Z (3 months ago)
- Last Synced: 2024-10-28T12:21:27.431Z (3 months ago)
- Topics: bert, gpt, language-model, natural-language-processing, natural-language-understanding, nlp, transformers
- Language: Rust
- Homepage: https://huggingface.co/docs/tokenizers
- Size: 9.73 MB
- Stars: 9,012
- Watchers: 119
- Forks: 795
- Open Issues: 58
Metadata Files:
- Readme: README.md
- License: LICENSE
- Citation: CITATION.cff
Awesome Lists containing this project
- awesome-python-machine-learning-resources - GitHub (30% open · ⏱️ 25.08.2022) (Text Data and NLP)
- awesome-rust - huggingface/tokenizers - Hugging Face's tokenizers for modern NLP pipelines (original implementation) with bindings for Python. [![Build Status](https://github.com/huggingface/tokenizers/workflows/Rust/badge.svg?branch=master)](https://github.com/huggingface/tokenizers/actions) (Libraries / Artificial Intelligence)
- awesome-rust-cn - huggingface/tokenizers - Hugging Face's tokenizers for modern NLP pipelines, written in Rust (Libraries / Artificial Intelligence)
- awesome-huggingface - tokenizers - Fast state-of-the-art tokenizers optimized for research and production. (🤗 Official Libraries)
- awesome-list - HuggingFace Tokenizers - A high-performance library for text vocabularies and tokenizers. (Natural Language Processing / General Purpose NLP)
- awesome-ARTificial - tokenizers
- StarryDivineSky - huggingface/tokenizers
- bert-in-production - huggingface/tokenizers - Fast State-of-the-Art Tokenizers optimized for Research and Production (Implementations)
- project-awesome - huggingface/tokenizers - 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production (Rust)
- awesome-yolo-object-detection - Tokenizers - Fast State-of-the-Art Tokenizers optimized for Research and Production. [huggingface.co/docs/tokenizers](https://huggingface.co/docs/tokenizers/index) (Other Versions of YOLO)
- awesome-llm-and-aigc - Tokenizers - Fast State-of-the-Art Tokenizers optimized for Research and Production. [huggingface.co/docs/tokenizers](https://huggingface.co/docs/tokenizers/index) (Summary)
- awesome-llmops - tokenizers - Fast State-of-the-Art Tokenizers optimized for Research and Production | ![GitHub Badge](https://img.shields.io/github/stars/huggingface/tokenizers.svg?style=flat-square) | (Serving / Large Model Serving)
- awesome-cuda-triton-hpc - Tokenizers - Fast State-of-the-Art Tokenizers optimized for Research and Production. [huggingface.co/docs/tokenizers](https://huggingface.co/docs/tokenizers/index) (Frameworks)
- awesome-rust-list - Tokenizers - Fast State-of-the-Art Tokenizers optimized for Research and Production. [huggingface.co/docs/tokenizers](https://huggingface.co/docs/tokenizers/index) (Machine Learning)
- fucking-awesome-rust - huggingface/tokenizers - Hugging Face's tokenizers for modern NLP pipelines (original implementation) with bindings for Python. [![Build Status](https://github.com/huggingface/tokenizers/workflows/Rust/badge.svg?branch=master)](https://github.com/huggingface/tokenizers/actions) (Libraries / Artificial Intelligence)
README
Provides an implementation of today's most used tokenizers, with a focus on performance and versatility.

## Main features
- Train new vocabularies and tokenize, using today's most used tokenizers.
- Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes
less than 20 seconds to tokenize a GB of text on a server's CPU.
- Easy to use, but also extremely versatile.
- Designed for research and production.
- Normalization comes with alignment tracking: it's always possible to recover the part of the
original sentence that corresponds to a given token (see the sketch after this list).
- Does all the pre-processing: truncate, pad, and add the special tokens your model needs.
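To make the last two points concrete, here is a minimal sketch. It assumes a tokenizer already trained and saved as `tokenizer.json` (as in the quick example below) whose vocabulary contains a `[PAD]` token; the exact tokens printed depend on the trained vocabulary:

```python
from tokenizers import Tokenizer

# Assumption: "tokenizer.json" was saved from a previously trained tokenizer
# whose vocabulary contains a "[PAD]" token.
tokenizer = Tokenizer.from_file("tokenizer.json")

# Built-in pre-processing: truncation and padding.
tokenizer.enable_truncation(max_length=128)
tokenizer.enable_padding(pad_token="[PAD]", pad_id=tokenizer.token_to_id("[PAD]"))

output = tokenizer.encode("Hello, y'all!")
print(output.tokens)
# offsets[i] is the (start, end) character span in the original input that
# produced token i, so the alignment with the raw text is never lost.
print(output.offsets)
```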
## Performance

Performance can vary depending on hardware, but running [bindings/python/benches/test_tiktoken.py](bindings/python/benches/test_tiktoken.py) should give the following on a g6 AWS instance:
![image](https://github.com/user-attachments/assets/2b913d4b-e488-4cbc-b542-f90a6c40643d)
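For a rough feel of throughput on your own hardware, you can time batch encoding directly. This is only an illustrative sketch, not the repository's benchmark script; both the model name and the corpus file below are placeholders:

```python
import time
from tokenizers import Tokenizer

# Placeholder model and corpus; any pretrained tokenizer and text file will do.
tokenizer = Tokenizer.from_pretrained("bert-base-uncased")
with open("corpus.txt", encoding="utf-8") as f:
    lines = f.readlines()

start = time.perf_counter()
tokenizer.encode_batch(lines)  # batch encoding is parallelized in Rust
elapsed = time.perf_counter() - start

total_mb = sum(len(line.encode("utf-8")) for line in lines) / 1e6
print(f"Encoded {total_mb:.1f} MB in {elapsed:.2f} s ({total_mb / elapsed:.1f} MB/s)")
```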
## Bindings

We provide bindings to the following languages (more to come!):
- [Rust](https://github.com/huggingface/tokenizers/tree/main/tokenizers) (Original implementation)
- [Python](https://github.com/huggingface/tokenizers/tree/main/bindings/python)
- [Node.js](https://github.com/huggingface/tokenizers/tree/main/bindings/node)
- [Ruby](https://github.com/ankane/tokenizers-ruby) (Contributed by @ankane, external repo)
## Quick example using Python

Choose your model between Byte-Pair Encoding, WordPiece or Unigram and instantiate a tokenizer:
```python
from tokenizers import Tokenizer
from tokenizers.models import BPE

tokenizer = Tokenizer(BPE())
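# Note: the model can also be given an unknown token for characters outside
# the learned vocabulary, e.g. Tokenizer(BPE(unk_token="[UNK]")).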
```

You can customize how pre-tokenization (e.g., splitting into words) is done:
```python
from tokenizers.pre_tokenizers import Whitespace

tokenizer.pre_tokenizer = Whitespace()
```

Then training your tokenizer on a set of files takes just two lines of code:
```python
from tokenizers.trainers import BpeTrainer

trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["wiki.train.raw", "wiki.valid.raw", "wiki.test.raw"], trainer=trainer)
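# BpeTrainer also accepts options such as vocab_size and min_frequency
# to control the size of the learned vocabulary.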
```

Once your tokenizer is trained, encode any text with just one line:
```python
output = tokenizer.encode("Hello, y'all! How are you 😁 ?")
print(output.tokens)
# ["Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?"]
```
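A trained tokenizer serializes to a single JSON file, so it can be reloaded later (for instance in production) without retraining. A minimal sketch, with an arbitrary file name:

```python
# Persist the whole pipeline (model, normalizer, pre-tokenizer, ...) to one file.
tokenizer.save("tokenizer.json")

# Reload it later without retraining.
from tokenizers import Tokenizer
tokenizer = Tokenizer.from_file("tokenizer.json")
```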
Check the [documentation](https://huggingface.co/docs/tokenizers/index)
or the [quicktour](https://huggingface.co/docs/tokenizers/quicktour) to learn more!