https://github.com/ksm26/improving-accuracy-of-llm-applications
The course equips developers with techniques to enhance the reliability of LLMs, focusing on evaluation, prompt engineering, and fine-tuning. Learn to systematically improve model accuracy through hands-on projects, including building a text-to-SQL agent and applying advanced fine-tuning methods.
- Host: GitHub
- URL: https://github.com/ksm26/improving-accuracy-of-llm-applications
- Owner: ksm26
- Created: 2024-08-23T12:54:15.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2024-08-29T08:54:53.000Z (10 months ago)
- Last Synced: 2024-08-29T17:16:55.388Z (10 months ago)
- Topics: evaluation-framework, instruction-fine-tuning, iterative-fine-tuning, llama-models, llm-accuracy, lora, memory-tuning, model-reliability, mome, performance-optimization, prompt-engineering, self-reflection, text-to-sql
- Language: Jupyter Notebook
- Homepage: https://www.deeplearning.ai/short-courses/improving-accuracy-of-llm-applications/
- Size: 1.2 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# [Improving Accuracy of LLM Applications](https://www.deeplearning.ai/short-courses/improving-accuracy-of-llm-applications/)
Welcome to the "Improving Accuracy of LLM Applications" course! The course provides a systematic approach to enhancing the accuracy and reliability of your LLM applications.
## Course Summary
Many developers struggle with inconsistent results in LLM applications. This course is designed to address these challenges by offering hands-on experience in improving accuracy through evaluation, prompt engineering, self-reflection, and fine-tuning techniques.

**What You'll Do:**
1. **SQL Agent Development**: Build a text-to-SQL agent and simulate situations where it hallucinates to begin the evaluation process.
2. **Evaluation Framework**: Create a robust framework to systematically measure performance, covering criteria for good evaluations, best practices, and the development of an evaluation score (a minimal sketch of such a score for the text-to-SQL agent follows this list).
3. **Instruction Fine-tuning**: Learn how instruction fine-tuning helps LLMs follow instructions more accurately and how memory fine-tuning embeds facts to reduce hallucinations.
4. **Parameter-Efficient Fine-Tuning (PEFT)**: Discover advanced techniques such as Low-Rank Adaptation (LoRA) and Mixture of Memory Experts (MoME) that reduce training time while improving model performance.
5. **Iterative Fine-tuning**: Go through an iterative process of generating training data, fine-tuning, and applying practical tips to increase model accuracy.
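
To make the agent-building and evaluation steps above concrete, here is a minimal, illustrative sketch of a text-to-SQL agent and an exact-match evaluation score. The schema, question, and hard-coded model stub are hypothetical stand-ins, not the course's own code (the course uses a Llama 3-8B model); swap in your actual model call and evaluation set.

```python
import sqlite3

# Hypothetical mini-schema standing in for the course's example database.
SCHEMA = "CREATE TABLE players (name TEXT, team TEXT, points_per_game REAL);"

def build_prompt(question: str) -> str:
    """Wrap the schema and question in a text-to-SQL prompt."""
    return (
        "You are a SQLite expert. Given the schema below, answer the "
        "question with a single SQL query and nothing else.\n\n"
        f"Schema:\n{SCHEMA}\n\nQuestion: {question}\nSQL:"
    )

def generate_sql(prompt: str) -> str:
    # Stand-in for the model call (the course uses a Llama 3-8B model);
    # hard-coded here only so the evaluation loop below runs end to end.
    return "SELECT name FROM players ORDER BY points_per_game DESC LIMIT 1;"

def run_query(conn: sqlite3.Connection, sql: str):
    """Execute a query; invalid SQL (e.g. a hallucinated column) counts as a miss."""
    try:
        return conn.execute(sql).fetchall()
    except sqlite3.Error:
        return None

def exact_match_score(conn, eval_set) -> float:
    """Fraction of questions whose generated SQL returns the reference rows."""
    hits = 0
    for question, reference_sql in eval_set:
        predicted_sql = generate_sql(build_prompt(question))
        if run_query(conn, predicted_sql) == run_query(conn, reference_sql):
            hits += 1
    return hits / len(eval_set)

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.execute("INSERT INTO players VALUES ('A. Guard', 'Hawks', 27.5)")
eval_set = [
    ("Who has the highest points per game?",
     "SELECT name FROM players ORDER BY points_per_game DESC LIMIT 1;"),
]
print(f"evaluation score: {exact_match_score(conn, eval_set):.2f}")
```

Comparing query results rather than raw SQL strings is a deliberate choice: two differently worded queries that return the same rows count as a match, while a hallucinated table or column simply fails to execute and scores zero.
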
## Key Points
- **Systematic Improvement**: Learn the development steps, from evaluation and prompting to self-reflection and fine-tuning, that improve your model's reliability and accuracy.
- **Memory Tuning**: Enhance your model's performance by embedding facts to reduce hallucinations.
- **Llama Models**: Use the Llama 3 8B model to build an LLM application that converts text to SQL with a custom schema (a minimal LoRA fine-tuning setup is sketched below).
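
As a rough illustration of the LoRA technique mentioned above, the sketch below configures low-rank adapters for a Llama 3 8B base model using Hugging Face's `peft` library. This is not the course's own code (the course works through Lamini's fine-tuning and memory-tuning tooling, and MoME is Lamini-specific), and the model name assumes access to the gated Meta Llama 3 8B checkpoint on the Hugging Face Hub.

```python
# Minimal LoRA setup with Hugging Face's peft library (illustrative only).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Assumes access to the gated Meta Llama 3 8B checkpoint.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the 8B weights
```

Only the small adapter matrices are trained, which is what cuts training time and memory relative to full fine-tuning.
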
## About the Instructors
- **Sharon Zhou**: Co-Founder and CEO of Lamini, Sharon brings her expertise in LLM development and fine-tuning.
- **Amit Sangani**: Senior Director of Partner Engineering at Meta, Amit shares valuable insights on engineering reliable LLM applications.

To enroll in the course or for further information, visit [deeplearning.ai](https://www.deeplearning.ai/short-courses/).