Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/FreedomIntelligence/HuatuoGPT-II
HuatuoGPT2, One-stage Training for Medical Adaption of LLMs. (An Open Medical GPT)
- Host: GitHub
- URL: https://github.com/FreedomIntelligence/HuatuoGPT-II
- Owner: FreedomIntelligence
- Created: 2023-07-22T15:41:22.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-08-30T18:09:25.000Z (4 months ago)
- Last Synced: 2024-11-09T17:42:53.942Z (about 1 month ago)
- Language: Python
- Homepage:
- Size: 10.9 MB
- Stars: 361
- Watchers: 31
- Forks: 60
- Open Issues: 29
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- StarryDivineSky - FreedomIntelligence/HuatuoGPT-II - Open-source 7B, 13B, and 34B versions. HuatuoGPT2 data: partial pre-training and fine-tuning instructions released. Chinese medical LLM evaluation: comprehensive automatic evaluation of LLMs' medical response capability, plus assessment on the fresh professional pharmacist licensure exam. (A01_Text Generation_Text Dialogue / Large language dialogue models and data)
README
# HuatuoGPT2, One-stage Training for Medical Adaption of LLMs
HuatuoGPT-II
🖥️ Online Demo (7B) |⬇️ 7B Model |⬇️ 13B Model | ⬇️ 34B Model | 📃 Paper
### ✨ Latest News
- [07/10/2024]: 🎉🎉🎉 Our paper is accepted for [COLM 2024](https://colmweb.org/AcceptedPapers.html)!
- [06/24/2024] We have made all training data for HuatuoGPT2 publicly available. This includes the [Pretraining dataset](https://huggingface.co/datasets/FreedomIntelligence/HuatuoGPT2-Pretraining-Instruction) and the [SFT dataset](https://huggingface.co/datasets/FreedomIntelligence/HuatuoGPT2-SFT-GPT4-140K).
- [01/10/2024] The HuatuoGPT2 model is now available on the [Wisemodel platform](https://www.wisemodel.cn/models/FreedomIntelligence/HuatuoGPT-II).
- [12/04/2023] We released the **code and dataset for our evaluation**.
- [11/24/2023] We released the **quantized version** of HuatuoGPT-II.
- [11/21/2023] We released the HuatuoGPT-II models, available in **7B**, **13B**, and **34B** versions.
- [11/17/2023] We released the [HuatuoGPT-II paper](https://arxiv.org/abs/2311.09774), achieving a new **state-of-the-art** in Chinese medical applications! Try our [demo](https://www.huatuogpt.cn/)!

## ⚡ Introduction
Hello! Welcome to the repository for [HuatuoGPT2](https://arxiv.org/abs/2311.09774).
HuatuoGPT2 employs an innovative domain adaptation method that significantly boosts its medical knowledge and dialogue proficiency. It achieves state-of-the-art performance on several medical benchmarks, notably surpassing GPT-4 in expert evaluations and on fresh medical licensing exams.
The open-source release of HuatuoGPT-2 includes:
- **HuatuoGPT2 Model**: Open-sourcing of 7B, 13B, and 34B versions.
- **Training Code**: Training code for one-stage adaptation will be provided, enabling better model adaptation across various languages and domains.
- **HuatuoGPT2 Data**: Release of partial pre-training and fine-tuning instructions.
- **Evaluation for Chinese Medical LLM**: Comprehensive automatic evaluation methods for the medical response capabilities of LLMs, plus assessment on the fresh professional pharmacist exam.

Note that we're still actively organizing our code and data. Please stay tuned for updates coming soon!
## 🌟 Performance
Compared with representative open-source models and closed-source models (including GPT-4), HuatuoGPT2 showed impressive performance on medical benchmarks. Here, we present two of the results.
- **Expert Evaluation**: In assessments by medical professionals, HuatuoGPT-II's responses in Chinese medical contexts were favored over counterparts like GPT-4:
| **HuatuoGPT-II Win Rate** | **Win** | **Tie** | **Fail** |
| -------------------------------------- | ------- | ------- | -------- |
| **Single-round Medical Response** | | | |
| HuatuoGPT-II(7B) vs GPT-4 | **38** | 38 | 24 |
| HuatuoGPT-II(7B) vs ChatGPT | **52** | 33 | 15 |
| HuatuoGPT-II(7B) vs Baichuan2-13B-Chat | **63** | 19 | 18 |
| HuatuoGPT-II(7B) vs HuatuoGPT | **81** | 11 | 8 |
| **Multi-round Medical Dialogue** | | | |
| HuatuoGPT-II(7B) vs GPT-4 | **53** | 17 | 30 |
| HuatuoGPT-II(7B) vs ChatGPT | **56** | 11 | 33 |
| HuatuoGPT-II(7B) vs Baichuan2-13B-Chat | **63** | 19 | 18 |
| HuatuoGPT-II(7B) vs HuatuoGPT | **68** | 6 | 26 |

- **The Fresh Medical Exams**: We collected the fresh 2023 Chinese National Pharmacist Licensure Examination, which started on October 21, 2023, after our data was finalized. HuatuoGPT2 achieved the best results on this exam, as shown below.
## 👩‍⚕️ Model
### Model Access
Our models are now available on Hugging Face. You can also try them at https://www.huatuogpt.cn/.
| Model | Backbone | Checkpoint |
| -------------- | ------------------ | ------------- |
| HuatuoGPT2-7B  | Baichuan2-7B-Base  | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT2-7B) |
| HuatuoGPT2-13B | Baichuan2-13B-Base | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT2-13B) |
| HuatuoGPT2-34B | Yi-34B             | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT2-34B) |

### Quantized Models
Quantized versions of HuatuoGPT2 are also provided, allowing users with limited memory or compute to run the model.
| Quantization | Backbone | Checkpoint |
| --------------------- | ------------- | ------------- |
| HuatuoGPT2-7B-4bits  | Baichuan2-7B-Base | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT2-7B-4bits) |
| HuatuoGPT2-7B-8bits  | Baichuan2-7B-Base | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT2-7B-8bits) |
| HuatuoGPT2-34B-4bits | Yi-34B            | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT2-34B-4bits) |
| HuatuoGPT2-34B-8bits | Yi-34B            | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT2-34B-8bits) |

### Model Inference
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/HuatuoGPT2-7B", use_fast=True, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("FreedomIntelligence/HuatuoGPT2-7B", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
messages = []
messages.append({"role": "user", "content": "肚子疼怎么办?"})
response = model.HuatuoChat(tokenizer, messages)
print(response)
```
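The `messages` list uses the standard role/content chat format, so a multi-turn dialogue can be continued by appending the model's reply before adding a follow-up question. A minimal sketch building on the snippet above, assuming `HuatuoChat` accepts prior `assistant` turns in the same format (the follow-up question is only illustrative):

```python
# Continue the conversation from the snippet above (assumption: HuatuoChat
# accepts earlier assistant turns in the same role/content format).
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "需要去医院做进一步检查吗?"})  # follow-up: "Should I go to the hospital for further checks?"

follow_up = model.HuatuoChat(tokenizer, messages)
print(follow_up)
```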
#### Inference with Command Line

```bash
python cli_demo.py --model_name FreedomIntelligence/HuatuoGPT2-7B
```
## 📚 Data

We open-source part of the training data.
| Data Type | # Training data | Link |
| --------------------------------------- | ------- | ------------------------------------------------------------ |
| Medical Fine-tuning Instruction (GPT-4) | 142,248 | [HF Link](https://huggingface.co/datasets/FreedomIntelligence/HuatuoGPT2-SFT-GPT4-140K) |
| Medical Pre-training Instruction         | 5,286,308 | [HF Link](https://huggingface.co/datasets/FreedomIntelligence/HuatuoGPT2-Pretraining-Instruction) |
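Both datasets can be pulled straight from the Hugging Face Hub. A minimal sketch with the `datasets` library; the `train` split and the record fields are assumptions, so check the dataset cards for the exact schema:

```python
from datasets import load_dataset

# SFT instructions distilled with GPT-4 (~142K pairs); split name is an assumption
sft = load_dataset("FreedomIntelligence/HuatuoGPT2-SFT-GPT4-140K", split="train")

# Pre-training instructions rewritten from the medical corpus (~5.3M pairs)
pretrain = load_dataset("FreedomIntelligence/HuatuoGPT2-Pretraining-Instruction", split="train")

print(sft[0])          # inspect one record to see the actual field names
print(len(pretrain))   # ~5.3M examples
```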
## 🌈 One-stage Adaptation

### Data Unification
- HuatuoGPT2 transforms the pre-training corpus into (instruction, output) pairs using an LLM. Use the following script for data unification:
```bash
python adaption/data_unification/rewrite.py
```
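Conceptually, the unification step rewrites each raw corpus passage into an (instruction, output) pair. The toy sketch below only illustrates the target record shape; `rewrite_with_llm` is a hypothetical stand-in for whatever LLM call `rewrite.py` actually performs:

```python
import json

def rewrite_with_llm(passage: str) -> dict:
    """Hypothetical stand-in for the LLM rewrite in adaption/data_unification/rewrite.py."""
    # In practice, an LLM generates a natural instruction that the passage answers,
    # plus an output grounded in the passage content.
    return {
        "instruction": "请解释这段医学内容所回答的问题。",  # illustrative instruction
        "output": passage,
    }

corpus = ["阿莫西林是一种青霉素类抗生素,常用于治疗细菌感染。"]  # toy passage
unified = [rewrite_with_llm(p) for p in corpus]

with open("unified_pairs.jsonl", "w", encoding="utf-8") as f:
    for pair in unified:
        f.write(json.dumps(pair, ensure_ascii=False) + "\n")
```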
### One-stage Training

- We introduce a priority sampling approach; pre-process the data with this algorithm:
```bash
python adaption/one_stage_training/data_process.py
```
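The actual priority sampling logic lives in `adaption/one_stage_training/data_process.py`. The sketch below only shows the generic idea of drawing training examples in proportion to an assigned priority weight; it is not the repository's implementation:

```python
import random

def priority_sample(examples, weights, k):
    """Generic weighted sampling without replacement: higher-priority examples
    are more likely to be drawn first. Illustrative only."""
    pool = list(zip(examples, weights))
    chosen = []
    for _ in range(min(k, len(pool))):
        total = sum(w for _, w in pool)
        r, acc = random.uniform(0, total), 0.0
        for i, (ex, w) in enumerate(pool):
            acc += w
            if r <= acc:
                chosen.append(ex)
                pool.pop(i)
                break
    return chosen

# Toy usage: favour GPT-4 SFT pairs over raw pre-training pairs.
data = ["sft_pair_1", "sft_pair_2", "pretrain_pair_1", "pretrain_pair_2"]
weights = [3.0, 3.0, 1.0, 1.0]
print(priority_sample(data, weights, k=2))
```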
- Then, run the one-stage training:

```bash
bash adaption/one_stage_training/train.sh
```

By adopting the one-stage adaptation method, you will observe the following loss curve:
## 🧐 Evaluation
### Automated Evaluation of Medical Response Quality
- Single-turn response evaluation using **GPT-4**:
```bash
python evaluation/eval_huatuo_inst.py
```

- Multi-turn dialogue evaluation using **GPT-4**:
```bash
python evaluation/eval_huatuo_conv.py
```
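Both evaluation scripts use GPT-4 as the judge. A minimal sketch of the pairwise-comparison idea with the `openai` client; the prompt wording, judging criteria, and model name here are assumptions rather than the repository's exact setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_pair(question: str, answer_a: str, answer_b: str) -> str:
    """Ask GPT-4 which of two medical answers is better (illustrative prompt only)."""
    prompt = (
        "You are a medical expert. Given a patient question and two candidate answers, "
        "reply with exactly 'A', 'B', or 'Tie' to indicate the better answer.\n\n"
        f"Question: {question}\nAnswer A: {answer_a}\nAnswer B: {answer_b}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

print(judge_pair("肚子疼怎么办?", "candidate answer A", "candidate answer B"))
```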
### The Fresh Medical Exams

Access our newest medical exam datasets via the links below. They include complete exam questions, with exam dates noted to flag potential data leakage. We plan to release more updated exams in the future.
| Examination | #Question | Exam Time | Links |
| ------------------------------------------------------------ | --------- | ---------- | ------------------------------------------------------------ |
| 2023 Chinese National Pharmacist Licensure Examination (Pharmacy) | 480 | 2023.10.22 | [huggingface](https://huggingface.co/datasets/FreedomIntelligence/2023_Pharmacist_Licensure_Examination-Pharmacy_track) |
| 2023 Chinese National Pharmacist Licensure Examination (TCM) | 480 | 2023.10.22 | [huggingface](https://huggingface.co/datasets/FreedomIntelligence/2023_Pharmacist_Licensure_Examination-TCM_track) |
| More **fresh** medical examinations are coming soon | | | |

## 🩺 HuatuoGPT Series
The HuatuoGPT series has so far launched two generations:
- [**HuatuoGPT**](https://github.com/FreedomIntelligence/HuatuoGPT): A Doctor-like Medical Large Language Model
- [**HuatuoGPT-II**](https://github.com/FreedomIntelligence/HuatuoGPT-II): A Domain-enhanced Medical Large Language Model

In the future, we will continue to release new versions of HuatuoGPT. Our goal is to enhance the capabilities of LLMs in the Chinese medical field while adhering to open-source principles (aligned with the ethos of FreedomIntelligence). We hope to work with everyone to advance the development of medical LLMs!
We are from the School of Data Science, the Chinese University of Hong Kong, Shenzhen (CUHKSZ) and the Shenzhen Research Institute of Big Data (SRIBD).
## Citation
```
@misc{chen2023huatuogptii,
title={HuatuoGPT-II, One-stage Training for Medical Adaption of LLMs},
author={Junying Chen and Xidong Wang and Anningzhe Gao and Feng Jiang and Shunian Chen and Hongbo Zhang and Dingjie Song and Wenya Xie and Chuyi Kong and Jianquan Li and Xiang Wan and Haizhou Li and Benyou Wang},
year={2023},
eprint={2311.09774},
archivePrefix={arXiv},
primaryClass={cs.CL}
}

@article{huatuogpt-2023,
title={HuatuoGPT, Towards Taming Language Models To Be a Doctor},
author={Hongbo Zhang and Junying Chen and Feng Jiang and Fei Yu and Zhihong Chen and Jianquan Li and Guiming Chen and Xiangbo Wu and Zhiyi Zhang and Qingying Xiao and Xiang Wan and Benyou Wang and Haizhou Li},
journal={arXiv preprint arXiv:2305.15075},
year={2023}
}
```

## Star History