Text Generation on Pre-Trained NousResearch’s Llama-2-7b-chat-hf using guanaco-llama2-1k dataset
- Host: GitHub
- URL: https://github.com/luluw8071/llama2-llm-7b-text-generation
- Owner: LuluW8071
- Created: 2024-02-15T16:09:11.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-07-28T17:31:21.000Z (10 months ago)
- Last Synced: 2025-01-11T02:47:46.208Z (4 months ago)
- Topics: 4bit-quantize, huggingface, llama2, llm, pytorch, text-generation
- Language: Jupyter Notebook
- Size: 42 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Llama2-LLM-Text-Generation
[**Llama 2**](https://llama.meta.com/llama2) is a collection of second-generation open-source Large Language Models (LLMs) from Meta, designed to handle a wide range of natural language processing tasks. These models range in scale from `7 billion to 70 billion parameters`.
It is optimized for dialogue use cases and has shown performance comparable to popular closed-source models such as **ChatGPT** and **PaLM**. This repository implements `Text Generation on Pre-Trained NousResearch’s Llama-2-7b-chat-hf` using the [`guanaco-llama2-1k`](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k) dataset.
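The notebook itself is not reproduced here, but a minimal sketch of loading the base model with 4-bit quantization (per the repo's `4bit-quantize` topic) and prompting it could look like the following; it uses Hugging Face `transformers` and `bitsandbytes`, and the prompt and generation settings are illustrative assumptions, not the notebook's actual values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization config (assumed settings) to keep GPU memory usage low
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model_name = "NousResearch/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# guanaco-llama2-1k samples follow the Llama 2 chat template ([INST] ... [/INST])
prompt = "<s>[INST] What is a large language model? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```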
---
### Note:
This repo contains my notes on the `Llama2` LLM and an attempt to fine-tune it for faster text generation while keeping GPU compute consumption to a minimum.
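As a rough illustration of that low-resource idea, the sketch below attaches LoRA adapters to the 4-bit quantized model with `peft`, so only a small fraction of parameters is trained; the rank, alpha, and dropout values are assumptions rather than the notebook's actual hyperparameters.

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Hypothetical LoRA hyperparameters; the notebook's real settings may differ.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# `model` is the 4-bit quantized Llama-2-7b-chat-hf loaded as shown earlier.
model = prepare_model_for_kbit_training(model)  # prepare quantized weights for training
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```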
_Feel free to open an issue if you run into any problems._