Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/zouharvi/tokenization-scorer
Simple-to-use scoring function for arbitrarily tokenized texts.
bpe segmentation subword tokenization
Last synced: 6 days ago
- Host: GitHub
- URL: https://github.com/zouharvi/tokenization-scorer
- Owner: zouharvi
- License: mit
- Created: 2023-04-29T14:44:20.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-09-12T10:07:48.000Z (2 months ago)
- Last Synced: 2024-10-12T18:21:13.087Z (about 1 month ago)
- Topics: bpe, segmentation, subword, tokenization
- Language: Python
- Homepage:
- Size: 28.3 KB
- Stars: 31
- Watchers: 1
- Forks: 3
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# `tokenization-scorer` [![PyPI Version](https://img.shields.io/pypi/v/tokenization-scorer.svg)](https://pypi.python.org/pypi/tokenization-scorer)
Simple package for evaluating text tokenizations.
The input is a text (a list of files or stdin) and the output is a single number.
The higher the number, the better the tokenization.
The intended workflow is to try multiple tokenizations and select the one with the highest number.

It can be used from the command line:
```bash
pip3 install tokenization-scorer

tokenization-scorer -i en-de.tokenized_with_unigramlm.{en,de}
> 0.4826

tokenization-scorer -i en-de.tokenized_with_wordpiece.{en,de}
> 0.5047
```

or within Python:
```python
import tokenization_scorer
text1 = "pick @@ed pick @@l @@ed pick @@les"
tokenization_scorer.score(text1, metric="renyi", power=2.5)
> 0.8031528501359657

text2 = "pick @@e @@d pick @@l @@e @@d pick @@l @@e @@s"
tokenization_scorer.score(text2, metric="renyi", power=2.5)
> 0.9105681923824472
```

Use `tokenization-scorer -h` to get an overview of supported metrics.
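To give an intuition for what the `renyi` metric in the examples above measures, here is a minimal sketch of Rényi efficiency computed over the token unigram distribution: Rényi entropy of order `power`, normalized by the maximum entropy `log |V|`. The function name `renyi_efficiency` and the normalization choice are assumptions for illustration; the package's actual implementation may differ in details.

```python
from collections import Counter
import math

def renyi_efficiency(tokens, power=2.5):
    """Sketch: Renyi entropy of order `power` (power != 1) of the token
    unigram distribution, normalized by log of the vocabulary size."""
    counts = Counter(tokens)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    if len(probs) == 1:
        return 0.0  # a single-token vocabulary carries no information
    # Renyi entropy: H_a(p) = 1/(1-a) * log(sum_i p_i^a)
    h = (1.0 / (1.0 - power)) * math.log(sum(p ** power for p in probs))
    # Normalize to [0, 1] by the maximum possible entropy, log |V|
    return h / math.log(len(probs))

balanced = "a b c d".split()   # uniform token frequencies -> efficiency 1.0
skewed = "a a a b".split()     # skewed frequencies -> lower efficiency
```

Under this sketch, a tokenization whose token frequencies are closer to uniform scores higher, matching the examples above where the more evenly segmented `text2` outscores `text1`.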
This package is a side-product of the paper [Tokenization and the Noiseless Channel](https://aclanthology.org/2023.acl-long.284/).

```
@inproceedings{tokenization_noiseless,
title={Tokenization and the Noiseless Channel},
author={Zouhar, Vilém and Meister, Clara and Gastaldi, Juan Luis and Sachan, Mrinmaya and Cotterell, Ryan},
booktitle={Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics},
year={2023},
url={https://aclanthology.org/2023.acl-long.284/},
}
```