# RWKV Tokenizer
[Build](https://github.com/cahya-wirawan/rwkv-tokenizer/actions/) · [PyPI](https://pypi.org/project/pyrwkv-tokenizer/) · [crates.io](https://crates.io/crates/rwkv-tokenizer) · [License](https://github.com/cahya-wirawan/rwkv-tokenizer/blob/main/LICENSE.txt)

A fast RWKV Tokenizer written in Rust that supports the World Tokenizer used by the
[RWKV](https://github.com/BlinkDL/RWKV-LM) v5 and v6 models.

## Installation
Install the pyrwkv-tokenizer Python module:
```
$ pip install pyrwkv-tokenizer
```
## Usage
```
>>> import pyrwkv_tokenizer
>>> tokenizer = pyrwkv_tokenizer.RWKVTokenizer()
>>> tokenizer.encode("Today is a beautiful day. 今天是美好的一天。")
[33520, 4600, 332, 59219, 21509, 47, 33, 10381, 11639, 13091, 15597, 11685, 14734, 10250, 11639, 10080]
>>> tokenizer.decode([33520, 4600, 332, 59219, 21509, 47, 33, 10381, 11639, 13091, 15597, 11685, 14734, 10250, 11639, 10080])
'Today is a beautiful day. 今天是美好的一天。'
>>> tokenizer.encode_batch(["Today is a beautiful day.", " 今天是美好的一天。"])
[[33520, 4600, 332, 59219, 21509, 47], [33, 10381, 11639, 13091, 15597, 11685, 14734, 10250, 11639, 10080]]
```

## Performance and Validity Test
We compared the encoding output of the Rust RWKV Tokenizer with the original tokenizer on
the English Wikipedia and Chinese poetry datasets; the results are identical. The Rust RWKV Tokenizer also
passes [the original tokenizer's unit test](https://github.com/BlinkDL/ChatRWKV/blob/main/tokenizer/rwkv_tokenizer.py).
To run the unit tests:
```
$ pip install pytest pyrwkv-tokenizer
$ git clone https://github.com/cahya-wirawan/rwkv-tokenizer.git
$ cd rwkv-tokenizer
$ pytest
```

We compared performance on [the simple English Wikipedia dataset 20220301.simple](https://huggingface.co/datasets/legacy-datasets/wikipedia)* among the following tokenizers:
- The original RWKV tokenizer (BlinkDL)
- Huggingface implementation of the RWKV tokenizer
- Huggingface Llama tokenizer
- Huggingface Mistral tokenizer
- BERT tokenizer
- OpenAI Tiktoken
- The Rust RWKV tokenizer

The comparison was done using this [Jupyter notebook](tools/rwkv_tokenizers.ipynb) on an M2 Mac mini. The Rust RWKV
tokenizer is around 17x faster than the original tokenizer and 9.6x faster than OpenAI Tiktoken.
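For a rough sense of how such a comparison can be run outside the notebook, here is a minimal, hypothetical micro-benchmark sketch (not the repository's notebook); the sample text, the repetition count, and the `cl100k_base` encoding choice are illustrative assumptions:

```
import time

import pyrwkv_tokenizer
import tiktoken

# Illustrative sample: a repeated bilingual sentence instead of the Wikipedia dump.
text = "Today is a beautiful day. 今天是美好的一天。" * 10_000

rwkv = pyrwkv_tokenizer.RWKVTokenizer()
tik = tiktoken.get_encoding("cl100k_base")  # assumed encoding choice

for name, encode in [("rwkv", rwkv.encode), ("tiktoken", tik.encode)]:
    start = time.perf_counter()
    tokens = encode(text)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(tokens)} tokens in {elapsed:.3f}s")
```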
We updated the Rust RWKV world tokenizer to support batch encoding with multithreading. We ran the same comparison
[script](tools/test_tiktoken-huggingface-rwkv.py) from the [Huggingface Tokenizers](https://github.com/huggingface/tokenizers)
with the additional RWKV tokenizer. The results show that the RWKV world tokenizer is significantly faster than
the Tiktoken and Huggingface tokenizers across all thread counts and document sizes (on average, about ten times faster).
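As a minimal sketch of what batch encoding looks like from Python (the document list and its size are illustrative; only `encode` and `encode_batch` are taken from the usage above):

```
import time

import pyrwkv_tokenizer

tokenizer = pyrwkv_tokenizer.RWKVTokenizer()
documents = ["Today is a beautiful day."] * 10_000  # illustrative workload

# Encode one document at a time (a plain Python loop).
start = time.perf_counter()
sequential = [tokenizer.encode(doc) for doc in documents]
print(f"one-by-one:   {time.perf_counter() - start:.3f}s")

# encode_batch() processes the whole list in one call, multithreaded in Rust.
start = time.perf_counter()
batched = tokenizer.encode_batch(documents)
print(f"encode_batch: {time.perf_counter() - start:.3f}s")

# Both paths should produce identical token IDs.
assert sequential == batched
```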
*The simple English Wikipedia dataset can be downloaded as a JSONL file from
https://huggingface.co/datasets/cahya/simple-wikipedia/resolve/main/simple-wikipedia.jsonl?download=true

## Tools using this tokenizer
We also created the [json2bin](https://github.com/cahya-wirawan/json2bin) application to convert datasets from JSONL format
into the binidx format used for training RWKV models. It uses multithreading to scale performance and
can convert a dataset more than 70 times faster (around 360 MB/s) than the original
[json2binidx_tool](https://github.com/Abel2076/json2binidx_tool) written in Python.
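For illustration, a tiny JSONL input in the one-JSON-object-per-line, `"text"`-field layout that such converters commonly expect can be written like this (the schema is an assumption; check json2bin's documentation for the exact format):

```
import json

# Hypothetical two-document dataset; the "text" key is an assumed schema.
docs = [
    {"text": "Today is a beautiful day."},
    {"text": "今天是美好的一天。"},
]
with open("dataset.jsonl", "w", encoding="utf-8") as f:
    for doc in docs:
        f.write(json.dumps(doc, ensure_ascii=False) + "\n")
```

## Changelog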
- Version 0.9.1
  - Added UTF-8 error handling to the decoder
- Version 0.9.0
  - Added multithreading to the encode_batch() function
  - Added a batch/multithreading comparison
- Version 0.3.0
  - Fixed an issue where some characters were not encoded correctly

*This tokenizer is my very first Rust program, so it might still have some bugs and silly code :-)*