https://github.com/pialghosh2233/llama2-healthcare-chat-bangla
- Host: GitHub
- URL: https://github.com/pialghosh2233/llama2-healthcare-chat-bangla
- Owner: PialGhosh2233
- Created: 2024-12-08T16:24:40.000Z (12 months ago)
- Default Branch: main
- Last Pushed: 2024-12-08T17:18:18.000Z (12 months ago)
- Last Synced: 2025-02-17T20:44:31.060Z (10 months ago)
- Topics: bangla, bangla-nlp, chatbot, generative-ai, llama, llm
- Language: Jupyter Notebook
- Size: 3.28 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
**Fine-tuned LLaMA-2 for Healthcare Q&A in Bangla**
#### Description:
This repository contains the code and configuration for fine-tuning **LLaMA-2-7b-chat** on a Bangla medical Q&A dataset using QLoRA (quantized low-rank adaptation). The fine-tuned model is intended to generate accurate, context-aware responses to healthcare-related queries in Bangla.
---
#### About the Dataset
The model is fine-tuned on the [doctor_qa_bangla](https://huggingface.co/datasets/shetumohanto/doctor_qa_bangla) dataset from the Hugging Face Hub.
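A minimal loading sketch using the standard `datasets` API; the split name and the `question`/`answer` column names are assumptions, so check the dataset card for the actual schema:

```python
from datasets import load_dataset

# Load the Bangla doctor Q&A dataset from the Hugging Face Hub.
# The split and column names below are assumptions; adjust them to
# match the actual dataset schema.
dataset = load_dataset("shetumohanto/doctor_qa_bangla", split="train")

def format_example(example):
    # Wrap each pair in the Llama-2 chat template so that training and
    # inference share the same prompt structure.
    return {"text": f"<s>[INST] {example['question']} [/INST] {example['answer']} </s>"}

dataset = dataset.map(format_example)
print(dataset[0]["text"])
```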
---
#### Features:
- **Model Architecture:** LLaMA-2-7b-chat fine-tuned using LoRA for efficient parameter updates.
- **Low-Rank Adaptation:** LoRA with rank `r=64`, dropout, and an alpha scaling factor for parameter-efficient updates (see the configuration sketch after this list).
- **Quantization:** 4-bit precision (nf4 quantization) for reduced memory usage and accelerated training.
- **Training Pipeline:**
- Supervised fine-tuning using Hugging Face's `transformers` and `trl` libraries.
- Customizable training hyperparameters (e.g., learning rate, gradient accumulation, and max sequence length).
- **Text Generation Pipeline:** Seamless inference setup for healthcare-related queries in Bangla.
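The quantization and adapter settings above can be expressed roughly as follows. This is a sketch, not the notebook's exact configuration: the base checkpoint name (the official `meta-llama` repo is gated; ungated mirrors exist), `lora_alpha`, and dropout values are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig

# Assumption: any Llama-2-7b-chat checkpoint on the Hub works here.
base_model = "meta-llama/Llama-2-7b-chat-hf"

# 4-bit nf4 quantization (QLoRA) to cut memory usage during training.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=False,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
model.config.use_cache = False  # disable the KV cache during training

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# Low-rank adapters: r=64 as described above; alpha and dropout are illustrative.
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)
```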
---
#### Requirements:
- Python 3.8+
- GPUs with CUDA support.
- Libraries: `accelerate`, `peft`, `transformers`, `trl`, `bitsandbytes`, and `datasets`.
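For example, a one-line install (versions unpinned; the notebook may pin specific releases):

```bash
pip install accelerate peft transformers trl bitsandbytes datasets
```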
---
#### Training Highlights:
- **Efficient Training:** Utilizes QLoRA for memory-efficient fine-tuning on consumer-grade GPUs.
- **Customizable Training:** Easily tweak settings such as batch size, learning rate, and sequence length (a minimal sketch follows below).
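A minimal training-and-inference sketch, assuming an older `trl` `SFTTrainer` API that accepts `dataset_text_field` and `max_seq_length` directly (newer releases move these into `SFTConfig`). All hyperparameter values are illustrative, and the `model`, `tokenizer`, `dataset`, and `peft_config` objects are reused from the sketches above.

```python
from transformers import TrainingArguments, pipeline
from trl import SFTTrainer

# Illustrative hyperparameters; tweak batch size, learning rate,
# gradient accumulation, and sequence length as needed.
training_arguments = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=1,
    learning_rate=2e-4,
    fp16=True,
    logging_steps=25,
    save_strategy="epoch",
)

trainer = SFTTrainer(
    model=model,                  # 4-bit base model from the sketch above
    train_dataset=dataset,        # formatted dataset from the sketch above
    peft_config=peft_config,      # LoRA configuration from the sketch above
    dataset_text_field="text",
    max_seq_length=512,
    tokenizer=tokenizer,
    args=training_arguments,
)
trainer.train()
trainer.model.save_pretrained("llama2-healthcare-chat-bangla")

# Quick inference check with the trained LoRA adapters still attached.
generator = pipeline("text-generation", model=trainer.model, tokenizer=tokenizer)
prompt = "<s>[INST] আপনার স্বাস্থ্য বিষয়ক প্রশ্ন এখানে লিখুন [/INST]"  # "write your health question here"
print(generator(prompt, max_new_tokens=200)[0]["generated_text"])
```

Note that `save_pretrained` here stores only the LoRA adapter weights; to export a standalone checkpoint, reload the base model, attach the adapters, and call `merge_and_unload()` on the PEFT model.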
---
#### Applications:
- Bangla-English conversational agents for healthcare.
- Educational tools for bilingual healthcare training.
---
Contributions are welcome to improve the model. 🚀