Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/allenai/dolma
Data and tools for generating and inspecting OLMo pre-training data.
data-processing large-language-models llm machine-learning nlp
- Host: GitHub
- URL: https://github.com/allenai/dolma
- Owner: allenai
- License: apache-2.0
- Created: 2023-06-20T20:37:39.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-10-29T23:59:27.000Z (about 2 months ago)
- Last Synced: 2024-10-30T00:44:35.963Z (about 2 months ago)
- Topics: data-processing, large-language-models, llm, machine-learning, nlp
- Language: Python
- Homepage: https://allenai.github.io/dolma/
- Size: 61.7 MB
- Stars: 972
- Watchers: 20
- Forks: 107
- Open Issues: 32
Metadata Files:
- Readme: README.md
- License: LICENSE
- Citation: CITATION.cff
Awesome Lists containing this project
- StarryDivineSky - allenai/dolma - This repository contains the source code for the Dolma Toolkit. (A01_Text Generation_Text Dialogue / Large Language Dialogue Models and Data)
README
Dolma is two things:
1. **Dolma Dataset**: an open dataset of 3 trillion tokens from a diverse mix of web content, academic publications, code, books, and encyclopedic materials.
2. **Dolma Toolkit**: a high-performance toolkit for curating datasets for language modeling -- this repo contains the source code for the Dolma Toolkit.
## Dolma Dataset
Dolma is an open dataset of 3 trillion tokens from a diverse mix of web content, academic publications, code, books, and encyclopedic materials.
It was created as a training corpus for [OLMo](https://allenai.org/olmo), a language model from the [Allen Institute for AI](https://allenai.org) (AI2).
Dolma is available for download on the HuggingFace 🤗 Hub: [`huggingface.co/datasets/allenai/dolma`](https://huggingface.co/datasets/allenai/dolma). Dolma is licensed under **[ODC-BY](https://opendatacommons.org/licenses/by/1-0/)**; see our [blog post](https://blog.allenai.org/making-a-switch-dolma-moves-to-odc-by-8f0e73852f44) for an explanation.
You can also read more about Dolma in [our announcement](https://blog.allenai.org/dolma-3-trillion-tokens-open-llm-corpus-9a0ff4b8da64), as well as by consulting its [data sheet](docs/assets/dolma-v0_1-20230819.pdf).
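For readers who want to peek at the corpus before committing to a full download, the snippet below is a minimal sketch of streaming a few records with the 🤗 `datasets` library. The default dataset configuration and the `id`/`text` field names are assumptions; check the dataset card for the exact versions and schema.
```python
# Minimal sketch (not from the Dolma docs): stream a few documents from the
# Hugging Face Hub without downloading the full 3-trillion-token corpus.
# Assumes `pip install datasets` and that the default configuration is
# sufficient; see the dataset card for available versions.
from itertools import islice

from datasets import load_dataset

# streaming=True returns an IterableDataset, so records are fetched lazily.
dolma = load_dataset("allenai/dolma", split="train", streaming=True)

for doc in islice(dolma, 3):
    # Field names such as "id" and "text" follow the dataset card; adjust if
    # the configuration you load exposes a different schema.
    print(doc.get("id"), len(doc.get("text", "")))
```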
## Dolma Toolkit
This repository houses the Dolma Toolkit, which enables curation of large datasets for (pre)-training ML models. Its key features are:
1. **High Performance** ⚡: Can process billions of documents concurrently thanks to built-in parallelism.
2. **Portability** 🧳: Works on a single machine, a cluster, or cloud environment.
3. **Built-In Taggers** 🏷: Includes ready-to-use taggers commonly used to curate datasets such as [Gopher](https://arxiv.org/abs/2112.11446), [C4](https://arxiv.org/abs/1910.10683), and [OpenWebText](https://openwebtext2.readthedocs.io/en/latest/).
4. **Fast Deduplication** 🗑: Speedy document deduplication using a Rust Bloom filter.
5. **Extensibility** 🧩 & **Cloud Support** ☁: Supports custom taggers (see the sketch below) and AWS S3-compatible locations.
To install, simply type `pip install dolma` in your terminal.
To learn more about how to use the Dolma Toolkit, please visit the [documentation](/docs).
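As an illustration of the extensibility point above, the sketch below registers a custom tagger that flags very short documents. The module paths (`dolma.core.*`), the `TaggerRegistry.add` decorator, and the `predict` signature follow the toolkit's documentation at the time of writing, but treat them as assumptions and confirm the current API against the [documentation](/docs).
```python
# Illustrative custom tagger (assumed API; verify module paths against /docs):
# flags documents shorter than 100 characters so they can be filtered later.
from dolma.core.data_types import DocResult, Document, Span
from dolma.core.registry import TaggerRegistry
from dolma.core.taggers import BaseTagger


@TaggerRegistry.add("short_doc_v1")  # name the tagger is referenced by elsewhere
class ShortDocTagger(BaseTagger):
    def predict(self, doc: Document) -> DocResult:
        score = 1.0 if len(doc.text) < 100 else 0.0
        # Emit one span covering the whole document, carrying the tagger's score.
        span = Span(start=0, end=len(doc.text), type="short_doc", score=score)
        return DocResult(doc=doc, spans=[span])
```
Once the module defining the tagger is importable, it can be referenced by its registered name, `short_doc_v1`, from the `dolma tag` subcommand described in the documentation.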
## Citation
If you use the Dolma dataset or toolkit, please cite the following items:
```bibtex
@article{dolma,
  title   = {{Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research}},
  author  = {Luca Soldaini and Rodney Kinney and Akshita Bhagia and Dustin Schwenk and David Atkinson and Russell Authur and Ben Bogin and Khyathi Chandu and Jennifer Dumas and Yanai Elazar and Valentin Hofmann and Ananya Harsh Jha and Sachin Kumar and Li Lucy and Xinxi Lyu and Nathan Lambert and Ian Magnusson and Jacob Morrison and Niklas Muennighoff and Aakanksha Naik and Crystal Nam and Matthew E. Peters and Abhilasha Ravichander and Kyle Richardson and Zejiang Shen and Emma Strubell and Nishant Subramani and Oyvind Tafjord and Pete Walsh and Luke Zettlemoyer and Noah A. Smith and Hannaneh Hajishirzi and Iz Beltagy and Dirk Groeneveld and Jesse Dodge and Kyle Lo},
  year    = {2024},
  journal = {arXiv preprint},
  url     = {https://arxiv.org/abs/2402.00159}
}
```