# NanoPeft

https://github.com/monk1337/nanopeft



A minimal, clean implementation of different LoRA methods for training/fine-tuning Transformer-based models (e.g., BERT, GPT).



Apache-2.0 license · PyPI · PRs welcome (http://makeapullrequest.com) · Colab



## Why NanoPeft?
- PEFT and LitGit are great libraries; however, hacking the Hugging Face PEFT (Parameter-Efficient Fine-Tuning) or LitGit packages to quickly integrate and benchmark a new LoRA method takes a lot of work.
- Because the code is kept so simple, it is easy to hack to your needs: add new LoRA methods from papers in the `layers/` directory (a minimal sketch follows this list) and fine-tune as you see fit.
- NanoPeft is intended for experimental/research purposes, not for scalable production use.
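
To illustrate the kind of module that might live in `layers/`, here is a minimal LoRA linear layer in PyTorch. The class name, constructor signature, and defaults are assumptions for this sketch, not NanoPeft's actual API:

```python
import math
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Hypothetical LoRA wrapper (illustrative only): a frozen linear layer
    plus a trainable low-rank update, y = W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.lora_A = nn.Parameter(torch.empty(r, base.in_features))
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        nn.init.kaiming_uniform_(self.lora_A, a=math.sqrt(5))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pretrained path plus the scaled low-rank update.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

Initializing `lora_B` to zero makes the low-rank update a no-op at the start of training, so fine-tuning begins exactly from the pretrained model's behavior.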

## Installation

### With pip

Install NanoPeft directly from GitHub with pip:

```bash
pip3 install git+https://github.com/monk1337/NanoPeft.git
```
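
Once installed, a typical workflow is to freeze a pretrained model and swap selected projections for LoRA-wrapped ones. The sketch below uses the hypothetical `LoRALinear` from the section above together with the standard Hugging Face `transformers` API; NanoPeft's real entry points may differ, and the module paths shown are those of the classic BERT implementation:

```python
import torch.nn as nn
from transformers import AutoModelForSequenceClassification

# LoRALinear is the illustrative sketch class defined in "Why NanoPeft?" above.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Freeze every pretrained parameter; only the LoRA factors will train.
for p in model.parameters():
    p.requires_grad_(False)

# Wrap the query/value projections of each attention block,
# a common choice of LoRA target modules.
for layer in model.bert.encoder.layer:
    attn = layer.attention.self
    attn.query = LoRALinear(attn.query, r=8, alpha=16)
    attn.value = LoRALinear(attn.value, r=8, alpha=16)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```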