https://github.com/afondiel/finetuning-llms-crash-course-dlai
Notes & Resources of LLMs Finetuning Crash Course from LAMINI.AI & DeepLearning.AI.
- Host: GitHub
- URL: https://github.com/afondiel/finetuning-llms-crash-course-dlai
- Owner: afondiel
- Created: 2024-11-12T17:53:15.000Z (7 months ago)
- Default Branch: main
- Last Pushed: 2024-11-16T00:29:13.000Z (7 months ago)
- Last Synced: 2025-01-22T08:12:15.188Z (5 months ago)
- Topics: finetuning, finetuning-llms, llms, lora, peft, peft-fine-tuning-llm
- Language: Jupyter Notebook
- Homepage: https://www.deeplearning.ai/short-courses/finetuning-large-language-models/
- Size: 8.41 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# **Finetuning Large Language Models (LLMs) Crash Course (DeepLearning.AI)**
## Overview
Learn the fundamentals of finetuning a large language model (LLM). Understand how finetuning differs from prompt engineering, and when to use each. Gain hands-on experience with real datasets, and learn how to apply these techniques to your own projects.
Instructor: [Sharon Zhou - @LAMINI_AI](https://x.com/realsharonzhou)
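The repo's topic tags include `lora` and `peft`. As a minimal, stdlib-only illustration of the LoRA idea behind those tags (a hypothetical sketch, not code from the course notebooks): instead of updating a full weight matrix `W` of shape `d × k`, you train two small matrices `B` (`d × r`) and `A` (`r × k`) with rank `r ≪ min(d, k)`, and use `W + B·A` as the effective weight.

```python
def matmul(X, Y):
    """Plain-Python matrix multiply for small demo matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, B, A, alpha=1.0):
    """W_eff = W + alpha * (B @ A): frozen base weight plus the
    trained low-rank update."""
    BA = matmul(B, A)
    return [[w + alpha * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, BA)]

# 4x4 frozen base weight; a rank-1 adapter has 4*1 + 1*4 = 8
# trainable parameters instead of 16 for the full matrix.
W = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
B = [[1.0], [0.0], [0.0], [0.0]]   # d x r
A = [[0.0, 0.5, 0.0, 0.0]]         # r x k
W_eff = lora_effective_weight(W, B, A)
print(W_eff[0])  # -> [1.0, 0.5, 0.0, 0.0]
```

In real use (e.g. with Hugging Face's `peft` library, which the course touches on), the base weights stay frozen and only the adapter matrices receive gradients, which is what makes this parameter-efficient.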
## Course Outline
- Introduction
- Why finetune
- Where finetuning fits in
- Instruction finetuning
- Data preparation
- Training process
- Evaluation and iteration
- Consideration on getting started now
- Conclusion

## Lab: Lectures & Notebooks
|Chapters|Notebooks|Demos|
|------------------------------------------------------|-----------------------------------------------------------|-------------------------------|
|[Introduction](./lab/chapters/slides/00_intro/)|-|-|
|[Why finetune](./lab/chapters/slides/01_why_finetune/)|[01_Why_finetuning_lab_student.ipynb](./lab/notebooks/L1/01_Why_finetuning_lab_student.ipynb)|-|
|[Where finetuning fits in](./lab/chapters/slides/02_where_finetuning_fits_in/)|[02_Where_finetuning_fits_in_lab_student.ipynb](./lab/notebooks/L2/02_Where_finetuning_fits_in_lab_student.ipynb)|-|
|[Instruction finetuning](./lab/chapters/slides/03_instruction_finetuning/)|[03_Instruction_tuning_lab_student.ipynb](./lab/notebooks/L3/03_Instruction_tuning_lab_student.ipynb)|-|
|[Data preparation](./lab/chapters/slides/04_data_preparation/)|[04_Data_preparation_lab_student.ipynb](./lab/notebooks/L4/04_Data_preparation_lab_student.ipynb)|-|
|[Training process](./lab/chapters/slides/05_training_process/)|[05_Training_lab_student.ipynb](./lab/notebooks/L5/05_Training_lab_student.ipynb)|-|
|[Evaluation and iteration](./lab/chapters/slides/06_evaluation_and_iteration/)|[06_Evaluation_lab_student.ipynb](./lab/notebooks/L6/06_Evaluation_lab_student.ipynb)|-|
|[Consideration on getting started now](./lab/chapters/slides/07_consideration_on_getting_started_now/)|-|-|
|[Conclusion](#)|-|-|

## References
- [Main Course - DeepLearning.AI](https://www.deeplearning.ai/short-courses/finetuning-large-language-models/)