Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/khaledsharif/corpo-llama
fine-tune llama3 models using ORPO
- Host: GitHub
- URL: https://github.com/khaledsharif/corpo-llama
- Owner: KhaledSharif
- License: mit
- Created: 2024-05-14T22:28:42.000Z (7 months ago)
- Default Branch: main
- Last Pushed: 2024-05-16T19:17:34.000Z (7 months ago)
- Last Synced: 2024-11-19T09:06:06.164Z (about 1 month ago)
- Topics: finetuning, llama3, llms
- Language: Python
- Homepage:
- Size: 112 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# corpo-llama
![corporate llama](hero.jpeg)
ORPO (Odds Ratio Preference Optimization) is a fine-tuning technique that combines the standard supervised fine-tuning (SFT) and preference alignment stages into a single procedure. This repository helps users fine-tune Llama-3-8B models using various configurations.
---
An odds ratio (OR) is a statistic that quantifies the strength of association between two events. It compares the odds of an event occurring in the presence of a given exposure with the odds of the same event occurring in its absence.
Odds ratios express how much more likely an event is to occur under one condition than under another, which makes them useful for comparing risks across situations.
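As a concrete illustration (not code from this repository), an odds ratio can be computed directly from the event probabilities under each condition:

```python
def odds(p: float) -> float:
    """Convert a probability to odds: p / (1 - p)."""
    return p / (1.0 - p)

def odds_ratio(p_exposed: float, p_unexposed: float) -> float:
    """Compare the event's odds with vs. without the exposure."""
    return odds(p_exposed) / odds(p_unexposed)

# An event with probability 0.4 under one condition and 0.2 under
# another has odds 0.4/0.6 and 0.2/0.8, giving an odds ratio ~2.67.
print(odds_ratio(0.4, 0.2))
```

An odds ratio above 1 means the event is more likely under the first condition; below 1, less likely.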
In machine learning and natural language processing, odds ratios offer a way to contrast model behaviors directly. In ORPO specifically, the odds ratio contrasts the model's odds of generating the preferred response against its odds of generating the rejected one, and this term guides optimization during fine-tuning, leading to a more efficient training procedure.
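The idea above can be sketched as a small scalar computation following the ORPO paper's formulation (this is a hedged illustration, not this repository's actual code; the function and argument names such as `avg_logp_chosen` are invented for the example):

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def log_odds(avg_logp: float) -> float:
    # Length-normalized sequence probability -> log-odds of the response.
    p = math.exp(avg_logp)
    return math.log(p / (1.0 - p))

def orpo_loss(nll_chosen: float, avg_logp_chosen: float,
              avg_logp_rejected: float, lam: float = 0.1) -> float:
    # Odds-ratio term: penalize the model when the rejected response
    # has odds comparable to (or higher than) the chosen one.
    log_or = log_odds(avg_logp_chosen) - log_odds(avg_logp_rejected)
    l_or = -math.log(sigmoid(log_or))
    # Single combined objective: SFT loss plus a weighted preference term.
    return nll_chosen + lam * l_or
```

Because the preference signal is just an extra term added to the SFT loss, one forward pass per pair suffices, which is what lets ORPO fold the two training stages into one.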