https://github.com/lazerlambda/promptzl
Turn LLMs into zero-shot PyTorch classifiers!
- Host: GitHub
- URL: https://github.com/lazerlambda/promptzl
- Owner: LazerLambda
- License: apache-2.0
- Created: 2024-06-23T19:00:17.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-01-17T18:24:58.000Z (9 months ago)
- Last Synced: 2025-01-17T19:34:08.616Z (9 months ago)
- Topics: classification, few-shot, huggingface, large-language-model, large-language-models, llama, llm, machine-learning, ml, prompt, prompt-engineering, prompt-toolkit, pytorch, qwen, transformer-models, transformers, transformers-library, zero-shot
- Language: Python
- Homepage: https://promptzl.readthedocs.io/en/latest/
- Size: 527 KB
- Stars: 4
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE.md
[License][#github-license]
[Documentation][#docs-package]
[PyPI][#pypi-package]

[#github-license]: https://github.com/LazerLambda/Promptzl/blob/main/LICENSE.md
[#docs-package]: https://promptzl.readthedocs.io/en/latest/
[#pypi-package]: https://pypi.org/project/promptzl/

# Pr🥨mptzl
Turn state-of-the-art LLMs into zero+-shot PyTorch classifiers in just a few lines of code.
Promptzl offers:
- 🤖 Zero+-shot classification with LLMs
- 🤗 Turning [causal](https://huggingface.co/models?pipeline_tag=text-generation) and [masked](https://huggingface.co/models?pipeline_tag=fill-mask) LMs into classifiers without any training
- 📦 Batch processing on your device for efficiency
- 🚀 Speed-up over calling an online API
- 🔎 Transparency and accessibility by using the model locally
- 📈 Distribution over labels
- ✂️ No need to extract the predictions from the answer

For more information, check out the [**official documentation**](https://promptzl.readthedocs.io/en/latest/).
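The "distribution over labels" above boils down to normalizing the model's next-token logits for each label's verbalizer tokens. A minimal PyTorch sketch of that normalization step (the logit values here are illustrative, not promptzl's internals):

```python
import torch

# Illustrative next-token logits from a causal LM for two verbalizer
# tokens, e.g. " negative" and " positive", over a batch of 3 reviews.
verbalizer_logits = torch.tensor([
    [1.2, 3.4],   # review 1: "positive" far more likely
    [0.5, 2.0],   # review 2: "positive" wins
    [2.8, 0.1],   # review 3: "negative" wins
])

# A softmax over the verbalizer logits yields a proper probability
# distribution over the class labels, not just a hard prediction.
label_dist = torch.softmax(verbalizer_logits, dim=-1)
predictions = label_dist.argmax(dim=-1)

print(predictions.tolist())  # [1, 1, 0]
```

Because the whole distribution is returned, you can also threshold on confidence or inspect ambiguous examples instead of only taking the argmax.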
## Installation
`pip install -U promptzl`
## Getting Started
In just a few lines of code, you can transform an LLM of your choice into an old-school classifier with all its desirable properties:
Set up the dataset:
```python
from datasets import Dataset

dataset = Dataset.from_dict(
    {
        'text': [
            "The food was absolutely wonderful, from preparation to presentation, very pleasing.",
            "The service was a bit slow, but the food made up for it. Highly recommend the pasta!",
            "The restaurant was too noisy and the food was mediocre at best. Not worth the price.",
        ],
        'label': [1, 1, 0]
    }
)
```

Define a prompt for guiding the language model to the correct predictions:
```python
from promptzl import FnVbzPair, Vbz

prompt = FnVbzPair(
    lambda e: f"""Restaurant review classification into categories 'positive' or 'negative'.

'Best pretzls in town!'='positive'
'Rude staff, horrible food.'='negative'

'{e['text']}'=""",
    Vbz({0: ["negative"], 1: ["positive"]}))
```

Initialize a model:
```python
from promptzl import CausalLM4Classification

model = CausalLM4Classification(
    'HuggingFaceTB/SmolLM2-1.7B',
    prompt=prompt)
```

Classify the data:
```python
from sklearn.metrics import accuracy_score

output = model.classify(dataset, show_progress_bar=True, batch_size=1)
accuracy_score(dataset['label'], output.predictions)
1.0
```

For more detailed tutorials, check out the [documentation](https://promptzl.readthedocs.io/en/latest/)!