Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/furk4neg3/ibm-pretraining-llms-huggingface
A hands-on project on pre-training and fine-tuning large language models (LLMs) using Hugging Face. Includes loading pretrained models, inferencing, and self-supervised fine-tuning techniques to customize LLMs for specific NLP applications.
- Host: GitHub
- URL: https://github.com/furk4neg3/ibm-pretraining-llms-huggingface
- Owner: furk4neg3
- Created: 2024-11-11T11:40:53.000Z (about 2 months ago)
- Default Branch: main
- Last Pushed: 2024-11-11T12:04:55.000Z (about 2 months ago)
- Last Synced: 2024-11-11T12:34:09.005Z (about 2 months ago)
- Language: Jupyter Notebook
- Size: 0 Bytes
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Pre-training Large Language Models with Hugging Face
This repository contains a project on pre-training large language models (LLMs) using the Hugging Face library. The project covers loading pretrained models, making inferences with the Hugging Face `Pipeline` API, and self-supervised fine-tuning of LLMs to adapt them for specific applications.
## Overview
This project explores:
- Loading Hugging Face pretrained models
- Making inferences with the Hugging Face `Pipeline` (see the sketch after this list)
- Self-supervised fine-tuning of LLMs on custom datasets
- Saving and loading models for later fine-tuning
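
The first two items above take only a few lines of code. A minimal sketch, assuming the small `distilgpt2` checkpoint in a text-generation pipeline (an illustrative choice, not necessarily the model used in the notebook):

```python
# A minimal inference sketch. "distilgpt2" is an illustrative checkpoint;
# any causal language model on the Hugging Face Hub can be swapped in.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

# The pipeline handles tokenization, the forward pass, and decoding.
outputs = generator("Large language models are", max_new_tokens=20)
print(outputs[0]["generated_text"])
```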
## Table of Contents
1. [Introduction](#introduction)
2. [Objectives](#objectives)
3. [Requirements](#requirements)
4. [References](#references)

## Introduction
This project demonstrates how to leverage Hugging Face to load, pretrain, and fine-tune large language models. By the end of this project, you will understand how to:
- Load pretrained models from Hugging Face and make inferences
- Train LLMs on custom data to tailor them to specific NLP tasks (a fine-tuning sketch follows this list)
- Save models for future fine-tuning and deployment
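
The self-supervised fine-tuning step amounts to causal language modeling on your own text. The sketch below uses the `Trainer` API under stated assumptions: the base model (`distilgpt2`), the data file (`my_corpus.txt`), and all hyperparameters are placeholders rather than the project's actual choices.

```python
# A hedged sketch of self-supervised (causal LM) fine-tuning on a custom
# plain-text corpus. Model name, file path, and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "distilgpt2"                      # assumed small base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token      # GPT-2 family has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# "my_corpus.txt" is a hypothetical custom dataset, one example per line.
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# mlm=False makes the collator build causal-LM labels from the inputs,
# i.e. the self-supervised objective: predict the next token.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="./llm-finetuned",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
```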
## Objectives
By completing this project, you will be able to:
1. Load pretrained LLMs and perform inference
2. Apply self-supervised fine-tuning to LLMs on custom data
3. Store and reload models for tailored NLP applications (see the save/reload sketch below)
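
Objective 3 maps onto `save_pretrained` / `from_pretrained`. A minimal sketch; the checkpoint name and output directory are illustrative assumptions:

```python
# Save a model and tokenizer, then reload them later for further
# fine-tuning or inference. Names below are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("distilgpt2")
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")

save_dir = "./llm-finetuned"
model.save_pretrained(save_dir)        # writes config.json + model weights
tokenizer.save_pretrained(save_dir)    # writes the tokenizer files alongside

# In a later session, point from_pretrained at the local directory instead
# of a Hub model id to pick up exactly where you left off.
model = AutoModelForCausalLM.from_pretrained(save_dir)
tokenizer = AutoTokenizer.from_pretrained(save_dir)
```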
## Requirements
- Python 3.7+
- Hugging Face Transformers Library
- PyTorch
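
A quick way to confirm the environment matches the list above (this check is a suggestion, not part of the original project):

```python
# Print the versions of the required components.
import sys

import torch
import transformers

print("Python      :", sys.version.split()[0])      # expect 3.7+
print("PyTorch     :", torch.__version__)
print("Transformers:", transformers.__version__)
```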
## References
- [IBM AI Engineering Professional Certificate](https://www.coursera.org/professional-certificates/ai-engineer?)
- [Generative AI Engineering with LLMs Specialization](https://www.coursera.org/specializations/generative-ai-engineering-with-llms)
- [Generative AI Engineering and Fine-Tuning Transformers](https://www.coursera.org/learn/generative-ai-engineering-and-fine-tuning-transformers?specialization=generative-ai-engineering-with-llms)