https://github.com/argilla-io/notus
Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first approach
- Host: GitHub
- URL: https://github.com/argilla-io/notus
- Owner: argilla-io
- License: mit
- Created: 2023-11-16T10:01:23.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-01-15T16:11:38.000Z (over 1 year ago)
- Last Synced: 2025-04-07T08:37:38.075Z (about 2 months ago)
- Topics: alignment-handbook, dpo, fine-tuning, lm-alignment, preference-data, trl, zephyr
- Language: Python
- Homepage:
- Size: 4.43 MB
- Stars: 167
- Watchers: 6
- Forks: 14
- Open Issues: 2
- Metadata Files:
- Readme: README.md
- License: LICENSE
💨 Notus
---
Notus is a collection of fine-tuned models using SFT, DPO, SFT+DPO, and/or other RLAIF/RLHF techniques, always following a data-first, human-centric approach, since that's what we do best at Argilla.
Notus models are intended to be used as assistants via chat-like applications, and are evaluated with Chat (MT-Bench, AlpacaEval) and Academic (Open LLM Leaderboard) benchmarks for a direct comparison with other similar LLMs.
The name Notus comes from the ancient Greek god Notus, as a wink to Zephyr, which is named after the ancient Greek god Zephyrus; the difference is that Notus is the god of the south wind, while Zephyrus is the god of the west wind. More information at https://en.wikipedia.org/wiki/Anemoi.
Fine-tuning LLMs while keeping a data-first approach wouldn't have been possible without the invaluable help of the open source community and all the amazing publicly available resources out there. We are very grateful for that, and we hope that our work can be useful for others as well.
🎩 h/t to the Hugging Face H4 team for their amazing work on [`alignment-handbook`](https://github.com/huggingface/alignment-handbook), and also for the fruitful discussions we had with them and for their support.
## News
* **December 1st, 2023**: Notus 7B v1 is released! 🎉 It uses the same DPO fine-tuning approach as Zephyr 7B Beta, but changes how the UltraFeedback data is binarized: preferences are derived from the average of the different criteria ratings instead of the overall critique score. Notus 7B improved over Zephyr 7B Beta on both AlpacaEval and the LM Eval Harness, while the MT-Bench results were on par. More information at [`v1/`](./v1/).
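The binarization change described above can be sketched as follows. This is an illustrative assumption of the idea, not the exact pipeline in `v1/`: field names (`"response"`, `"ratings"`) are hypothetical, and picking the lowest-ranked completion as the rejected one is just one possible choice.

```python
# Hypothetical sketch: build a (chosen, rejected) preference pair from a list
# of rated completions by ranking on the AVERAGE of per-criterion ratings
# (e.g. helpfulness, honesty, ...) instead of a single overall critique score.

def binarize_by_average(completions):
    """Return (chosen, rejected) responses from a list of rated completions.

    Each completion is a dict with a "response" string and a "ratings"
    dict mapping criterion name -> numeric rating.
    """
    def avg_rating(completion):
        ratings = list(completion["ratings"].values())
        return sum(ratings) / len(ratings)

    # Highest average rating becomes the chosen response; here the lowest
    # becomes the rejected one (the real pipeline may sample differently).
    ranked = sorted(completions, key=avg_rating, reverse=True)
    return ranked[0]["response"], ranked[-1]["response"]
```

The resulting (chosen, rejected) pairs are what a DPO trainer (e.g. `trl`'s `DPOTrainer`) consumes.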
## Resources
### 🤗 HuggingFace Hub Collection
Available at: https://huggingface.co/collections/argilla/notus-7b-v1-655529d7c73cb6c830e9555a
### 💬 Chat UI
Chat with Notus at https://argilla-notus-chat-ui.hf.space/ (powered by https://github.com/huggingface/chat-ui)
## Citation
Since most of the content is ported / adapted from [`huggingface/alignment-handbook`](https://github.com/huggingface/alignment-handbook), we recommend citing their work.
```bibtex
@misc{alignment_handbook2023,
  author = {Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Alexander M. Rush and Thomas Wolf},
  title = {The Alignment Handbook},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/alignment-handbook}}
}
```

Additionally, if you find any of the contents within this repository useful, please feel free to use the following BibTeX citation as well:
```bibtex
@misc{notus2023,
  author = {Alvaro Bartolome and Gabriel Martin and Daniel Vila},
  title = {Notus},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/argilla-io/notus}}
}
```

> [!NOTE]
> Alphabetically ordered by last name due to equal contribution.