Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/tobistudio/finetuning-openai
Last synced: 3 days ago
JSON representation
- Host: GitHub
- URL: https://github.com/tobistudio/finetuning-openai
- Owner: tobistudio
- Created: 2023-11-07T11:10:25.000Z (about 1 year ago)
- Default Branch: master
- Last Pushed: 2024-04-10T09:14:39.000Z (9 months ago)
- Last Synced: 2024-11-11T06:48:24.094Z (2 months ago)
- Language: Python
- Size: 550 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
Awesome Lists containing this project
README
# FineTuning-OpenAI
Fine-tuning OpenAI text generation models can make them better for specific applications, but it requires a careful investment of time and effort. We recommend first attempting to get good results with prompt engineering, prompt chaining (breaking complex tasks into multiple prompts; sketched below), and function calling, with the key reasons being:

- There are many tasks at which our models may not initially appear to perform well, but results can be improved with the right prompts, so fine-tuning may not be necessary.
- Iterating over prompts and other tactics has a much faster feedback loop than iterating with fine-tuning, which requires creating datasets and running training jobs.
- In cases where fine-tuning is still necessary, initial prompt engineering work is not wasted; we typically see the best results when using a good prompt in the fine-tuning data (or combining prompt chaining / tool use with fine-tuning).

We can fine-tune the following models:

- gpt-3.5-turbo-1106 (recommended)
- babbage-002
- davinci-002
- gpt-4-0613
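As a minimal sketch of the prompt-chaining idea mentioned above, assuming the OpenAI Python SDK with an illustrative model choice and prompts:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """One chat completion call; gpt-3.5-turbo is an illustrative choice."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Chain two prompts: summarize first, then extract action items from the summary.
summary = ask("Summarize this meeting transcript: ...")
actions = ask(f"List the action items mentioned in this summary:\n{summary}")
print(actions)
```

Breaking the task into two smaller prompts like this is often easier to debug and iterate on than a single large prompt, which is part of why it is worth trying before fine-tuning.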
Our prompt engineering guide provides a background on some of the most effective strategies and tactics for getting better performance without fine-tuning. You may find it helpful to iterate quickly on prompts in our playground.
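Putting the third reason above into practice, here is a minimal sketch, assuming the OpenAI Python SDK and illustrative file names and prompt content, of preparing chat-format JSONL training data with a good system prompt baked in and then starting a fine-tuning job:

```python
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each JSONL line is one chat example; a consistent, well-engineered
# system prompt is reused across examples (content is illustrative).
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a concise support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Account > Reset Password."},
        ]
    },
]

with open("data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Upload the training file, then start a job on the recommended base model.
training_file = client.files.create(file=open("data.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo-1106",
)
print(job.id, job.status)
```

A real dataset needs many such examples; this only illustrates the expected shape of the data and the job-creation call.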