Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/furk4neg3/ibm-transformer-fine-tuning-pytorch-huggingface
A project demonstrating fine-tuning techniques for large language models (LLMs) using PyTorch and Hugging Face's `SFTTrainer` module. Covers data preparation, training-loop implementation, task-specific fine-tuning, and performance evaluation.
- Host: GitHub
- URL: https://github.com/furk4neg3/ibm-transformer-fine-tuning-pytorch-huggingface
- Owner: furk4neg3
- Created: 2024-11-11T12:35:41.000Z (about 2 months ago)
- Default Branch: main
- Last Pushed: 2024-11-11T12:40:35.000Z (about 2 months ago)
- Last Synced: 2024-11-11T13:32:31.790Z (about 2 months ago)
- Language: Jupyter Notebook
- Size: 0 Bytes
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Fine-Tuning Transformers with PyTorch and Hugging Face
This project demonstrates the process of loading, fine-tuning, and evaluating large language models (LLMs) using PyTorch and Hugging Face’s tools. It includes task-specific fine-tuning using Hugging Face’s `SFTTrainer` module and implementing a custom supervised training loop in PyTorch to build high-performing NLP models for specific use cases.
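The custom supervised training loop mentioned above can be sketched in plain PyTorch. The tiny model and synthetic token data below are illustrative stand-ins, not the notebook's actual model or dataset:

```python
# Minimal sketch of a supervised training loop in PyTorch.
# A small embedding classifier and random token IDs stand in for a
# pretrained LLM and a real task-specific dataset.
import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB, SEQ_LEN = 100, 8
model = nn.Sequential(
    nn.Embedding(VOCAB, 32),          # token embeddings
    nn.Flatten(),                     # (batch, seq, dim) -> (batch, seq*dim)
    nn.Linear(32 * SEQ_LEN, VOCAB),   # logits over the vocabulary
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic (input_ids, target_token) pairs standing in for tokenized text.
inputs = torch.randint(0, VOCAB, (64, SEQ_LEN))
targets = torch.randint(0, VOCAB, (64,))

losses = []
model.train()
for epoch in range(20):
    optimizer.zero_grad()
    logits = model(inputs)            # forward pass
    loss = loss_fn(logits, targets)   # supervised objective
    loss.backward()                   # backpropagate gradients
    optimizer.step()                  # update parameters
    losses.append(loss.item())

print(f"first loss {losses[0]:.3f}, last loss {losses[-1]:.3f}")
```

The same five steps (zero gradients, forward, loss, backward, optimizer step) apply unchanged when the model is a pretrained transformer and the batch comes from a tokenized dataset.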
## Note
The kernel crashes when it reaches the final cell of the notebook, preventing any output from being generated.
## Overview
This project covers:
- Loading pretrained LLMs from Hugging Face
- Implementing a custom supervised training loop in PyTorch
- Task-specific fine-tuning with the `SFTTrainer` module
- Model evaluation for optimized task performance
## Table of Contents
1. [Introduction](#introduction)
2. [Objectives](#objectives)
3. [Requirements](#requirements)
4. [References](#references)
## Introduction
This project introduces the process of fine-tuning large language models for specific NLP tasks using PyTorch and Hugging Face. By loading pretrained LLMs and fine-tuning them with the `SFTTrainer` module, this project demonstrates how to create powerful, task-specific language models and assess their performance.
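Loading a pretrained LLM and making an inference can be sketched with the Transformers `Auto*` classes; `gpt2` below is a placeholder checkpoint, not necessarily the one the notebook uses:

```python
# Illustrative sketch: load a pretrained causal LM from the Hugging Face Hub
# and generate a continuation. "gpt2" is a stand-in checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Fine-tuning transformers is"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(text)
```

Greedy decoding (`do_sample=False`) is used here so the output is deterministic; sampling parameters such as `temperature` and `top_p` can be passed to `generate` for more varied text.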
## Objectives
By completing this project, you will:
1. Load pretrained LLMs and make inferences using Hugging Face
2. Fine-tune LLMs on task-specific data with `SFTTrainer`
3. Evaluate and compare model performance for various NLP tasks
## Requirements
- Python 3.7+
- PyTorch
- Hugging Face Transformers library
- Hugging Face TRL library (provides `SFTTrainer`)
## References
- [IBM AI Engineering Professional Certificate](https://www.coursera.org/professional-certificates/ai-engineer?)
- [Generative AI Engineering with LLMs Specialization](https://www.coursera.org/specializations/generative-ai-engineering-with-llms)
- [Generative AI Engineering and Fine-Tuning Transformers](https://www.coursera.org/learn/generative-ai-engineering-and-fine-tuning-transformers?specialization=generative-ai-engineering-with-llms)