https://github.com/ksm26/improving-accuracy-of-llm-applications

The course equips developers with techniques to enhance the reliability of LLMs, focusing on evaluation, prompt engineering, and fine-tuning. Learn to systematically improve model accuracy through hands-on projects, including building a text-to-SQL agent and applying advanced fine-tuning methods.

evaluation-framework instruction-fine-tuning iterative-fine-tuning llama-models llm-accuracy lora memory-tuning model-reliability mome performance-optimization prompt-engineering self-reflection text-to-sql




# 🎯 [Improving Accuracy of LLM Applications](https://www.deeplearning.ai/short-courses/improving-accuracy-of-llm-applications/)

Welcome to the "Improving Accuracy of LLM Applications" course! 🚀 The course provides a systematic approach to enhancing the accuracy and reliability of your LLM applications.

## 📘 Course Summary
Many developers struggle with inconsistent results in LLM applications. 😓 This course addresses these challenges by offering hands-on experience in improving accuracy through evaluation, prompt engineering, self-reflection, and fine-tuning techniques.

**What You'll Do:**
1. 🧠 **SQL Agent Development**: Build a text-to-SQL agent and simulate situations where it hallucinates to begin the evaluation process.
2. 📊 **Evaluation Framework**: Create a robust framework to systematically measure performance, including criteria for good evaluations, best practices, and developing an evaluation score.
3. 🎯 **Instruction Fine-tuning**: Learn how instruction fine-tuning helps LLMs follow instructions more accurately and how memory fine-tuning embeds facts to reduce hallucinations.
4. 🚀 **Parameter-Efficient Fine-tuning (PEFT)**: Discover advanced techniques such as Low-Rank Adaptation (LoRA) and Mixture of Memory Experts (MoME) that reduce training time while improving model performance.
5. 🔄 **Iterative Fine-tuning**: Work through an iterative process of generating training data, fine-tuning, and applying practical tips to increase model accuracy.
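Steps 1 and 2 above can be illustrated with a tiny evaluation harness. This sketch is not taken from the course materials: it scores a stand-in text-to-SQL "agent" (here just a lookup table of canned answers, one of which hallucinates a column) by execution accuracy, i.e. by comparing the result sets of generated and reference queries against an in-memory SQLite database. The schema, questions, and answers are all invented for illustration.

```python
import sqlite3

def execution_accuracy(db, cases, generate_sql):
    """Score a text-to-SQL agent by comparing the result sets of
    generated vs. reference queries (execution accuracy)."""
    correct = 0
    for question, gold_sql in cases:
        pred_sql = generate_sql(question)
        try:
            pred = db.execute(pred_sql).fetchall()
        except sqlite3.Error:
            continue  # invalid SQL counts as a miss
        gold = db.execute(gold_sql).fetchall()
        # Compare as order-insensitive multisets of rows.
        if sorted(pred) == sorted(gold):
            correct += 1
    return correct / len(cases)

# Toy schema and data for the harness.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
db.executemany("INSERT INTO employees VALUES (?, ?, ?)",
               [("Ada", "eng", 120.0), ("Bo", "sales", 90.0)])

# Stand-in for an LLM agent: canned answers, one with a hallucinated column.
canned = {
    "How many employees are there?": "SELECT COUNT(*) FROM employees",
    "Who works in eng?": "SELECT salary FROM employees",  # wrong column
}
cases = [
    ("How many employees are there?", "SELECT COUNT(*) FROM employees"),
    ("Who works in eng?", "SELECT name FROM employees WHERE dept = 'eng'"),
]

score = execution_accuracy(db, cases, canned.get)
print(score)  # 0.5
```

Execution accuracy is a common scoring choice for text-to-SQL because two syntactically different queries can still be correct if they return the same rows.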

## 🔑 Key Points
- 🛠️ **Systematic Improvement**: Learn a development workflow that moves from evaluation and prompting through self-reflection and fine-tuning to improve your model's reliability and accuracy.
- 🧠 **Memory Tuning**: Enhance your model's performance by embedding facts to reduce hallucinations.
- 👍 **Llama Models**: Use the Llama 3-8B model to build an LLM application that converts text to SQL with a custom schema.

## πŸ‘©β€πŸ« About the Instructors
- πŸ‘©β€πŸ’Ό **Sharon Zhou**: Co-Founder and CEO of Lamini, Sharon brings her expertise in LLM development and fine-tuning.
- πŸ‘¨β€πŸ’Ό **Amit Sangani**: Senior Director of Partner Engineering at Meta, Amit shares valuable insights on engineering reliable LLM applications.

🔗 To enroll in the course or for further information, visit 📚 [deeplearning.ai](https://www.deeplearning.ai/short-courses/).