# DISC-MedLLM

<div align="center">

[![Generic badge](https://img.shields.io/badge/🤗-Huggingface%20Repo-green.svg)](https://huggingface.co/Flmc/DISC-MedLLM)
[![license](https://img.shields.io/github/license/modelscope/modelscope.svg)](https://github.com/FudanDISC/DISC-MedLLM/blob/main/LICENSE)
<br>
</div>
<div align="center">

[Demo](http://med.fudan-disc.com) | [Technical Report](https://arxiv.org/abs/2308.14346)
<br>
中文 | [EN](https://github.com/FudanDISC/DISC-MedLLM/blob/main/README_EN.md)
</div>

DISC-MedLLM is a medical-domain large language model designed for healthcare conversation scenarios, developed and open-sourced by the [Fudan Data Intelligence and Social Computing Lab (Fudan-DISC)](http://fudan-disc.com).

This project releases the following open-source resources:
* The [DISC-Med-SFT dataset](https://huggingface.co/datasets/Flmc/DISC-Med-SFT) (excluding the behavioral preference training data)
* The [model weights](https://huggingface.co/Flmc/DISC-MedLLM) of DISC-MedLLM

You can try our model through this [link](http://med.fudan-disc.com).

## Overview
DISC-MedLLM is a domain-specific large model built for medical and healthcare conversation scenarios. It can serve a range of healthcare needs, including disease consultation and treatment inquiries, providing high-quality health-support services.

DISC-MedLLM effectively aligns with human preferences in medical scenarios, bridging the gap between the outputs of general-purpose language models and real-world medical dialogue, as reflected in our experimental results.

Thanks to our goal-oriented strategy and a multi-source data construction pipeline, built on real doctor-patient dialogues and knowledge graphs with both LLM-in-the-loop and human-in-the-loop mechanisms, DISC-MedLLM has the following features:

* **Reliable and rich professional knowledge.** We use a medical knowledge graph as the information source, sampling triples and leveraging the language capabilities of a general-purpose LLM to construct dialogue samples.
* **Multi-turn inquiry capability.** We use records of real consultations as the information source and reconstruct the dialogues with an LLM, requiring the model to stay fully faithful to the medical information in the original conversation.
* **Responses aligned with human preferences.** Patients want richer supporting information and background knowledge during a consultation, but human doctors' answers are often terse. Through manual screening, we built a small, high-quality set of behavioral fine-tuning samples that align with patients' needs.

<img src="https://github.com/FudanDISC/DISC-MedLLM/blob/main/images/data_construction.png" alt="data-construction" width="85%"/>

## Demo
### Disease consultation
<img src="https://github.com/FudanDISC/DISC-MedLLM/blob/main/images/consultation.gif" alt="sample1" width="60%"/>

### Treatment inquiry
<img src="https://github.com/FudanDISC/DISC-MedLLM/blob/main/images/advice.gif" alt="sample2" width="60%"/>

## Dataset

To train DISC-MedLLM, we constructed a high-quality dataset named DISC-Med-SFT, containing more than 470k samples derived and reconstructed from existing medical datasets. We adopted a goal-oriented strategy, rebuilding the SFT data from several carefully selected sources. These data help the model learn medical domain knowledge, align its behavior with human preferences, and match the distribution of real-world online medical dialogue.
| Dataset | Original source | Size |
|---|---|---|
| Re-constructed AI doctor-patient dialogue | MedDialog | 400k |
| Re-constructed AI doctor-patient dialogue | cMedQA2 | 20k |
| Knowledge-graph QA pairs | CMeKG | 50k |
| Behavioral preference dataset | Manual selection | 2k |
| Other | MedMCQA | 8k |
| Other | MOSS-SFT | 33k |
| Other | Alpaca-GPT4-zh | 1k |

<br>
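As an illustration of the knowledge-graph route described above (sampling triples from CMeKG and rendering them as QA samples with an LLM), here is a minimal, hypothetical sketch; the templates and field names are assumptions for illustration, not the project's actual pipeline:

```python
# Hypothetical sketch: turning a (head, relation, tail) knowledge-graph triple
# into a single-turn QA training sample. Templates and field names are
# illustrative assumptions, not the project's actual data pipeline.

def triple_to_qa(head: str, relation: str, tail: str) -> dict:
    """Render a knowledge-graph triple as a single-turn QA sample."""
    templates = {
        "症状": "{h}有哪些常见症状？",   # "What are common symptoms of {h}?"
        "治疗": "{h}通常如何治疗？",     # "How is {h} usually treated?"
    }
    question = templates.get(relation, "{h}的{r}是什么？").format(h=head, r=relation)
    return {
        "conversation": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": f"{head}的{relation}包括：{tail}。"},
        ]
    }

sample = triple_to_qa("颈椎病", "症状", "颈部疼痛、头晕、上肢麻木")
```

In the actual pipeline described above, the assistant side would be rewritten by a general-purpose LLM rather than filled from a fixed template.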
### Download

We release nearly 470k training samples in total, including the re-constructed AI doctor-patient dialogues and the knowledge-graph QA pairs. You can download the dataset from this [link](https://huggingface.co/datasets/Flmc/DISC-Med-SFT).

<br>

## Deployment

The current version of DISC-MedLLM is trained on top of [Baichuan-13B-Base](https://github.com/baichuan-inc/Baichuan-13B). You can download our model weights directly from [Hugging Face](https://huggingface.co/Flmc/DISC-MedLLM), or let them be fetched automatically as in the code sample below.

First, install the project dependencies.
```shell
pip install -r requirements.txt
```

### Inference with the Hugging Face transformers library
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> from transformers.generation.utils import GenerationConfig
>>> tokenizer = AutoTokenizer.from_pretrained("Flmc/DISC-MedLLM", use_fast=False, trust_remote_code=True)
>>> model = AutoModelForCausalLM.from_pretrained("Flmc/DISC-MedLLM", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)
>>> model.generation_config = GenerationConfig.from_pretrained("Flmc/DISC-MedLLM")
>>> messages = []
>>> messages.append({"role": "user", "content": "我感觉自己颈椎非常不舒服，每天睡醒都会头痛"})
>>> response = model.chat(tokenizer, messages)
>>> print(response)
```

### Run the command-line demo
```shell
python cli_demo.py
```
### Run the web demo
```shell
streamlit run web_demo.py --server.port 8888
```

Additionally, since the current version of DISC-MedLLM uses Baichuan-13B as its base, you can follow the instructions in the [Baichuan-13B project](https://github.com/baichuan-inc/Baichuan-13B) for int8 or int4 quantized inference. Note, however, that quantization may degrade model performance.
<br>

## Fine-tuning the model
You can fine-tune our model with data structured the same way as our dataset. Our training code is modified from [Firefly](https://github.com/yangjianxin1/Firefly), using a different data structure and dialogue format. Here we only provide code for full-parameter fine-tuning:
```shell
deepspeed --num_gpus={num_gpus} ./train/train.py --train_args_file ./train/train_args/sft.json
```
> Please check the settings in `sft.json` before starting training.
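For orientation, a training-arguments file of this kind might look like the fragment below. Every key and value here is an illustrative assumption in the style of common Firefly/Transformers training configs; the authoritative file ships at `./train/train_args/sft.json`:

```json
{
  "output_dir": "output/disc-medllm-sft",
  "model_name_or_path": "baichuan-inc/Baichuan-13B-Base",
  "train_file": "./data/disc_med_sft.jsonl",
  "num_train_epochs": 1,
  "per_device_train_batch_size": 8,
  "learning_rate": 1e-5,
  "max_seq_length": 1024,
  "logging_steps": 100,
  "save_steps": 500,
  "gradient_checkpointing": true
}
```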
<br>If you want to fine-tune our model with other training code, use the following dialogue format.
```
<\b><$user_token>content<$assistant_token>content<\s><$user_token>content ...
```
The `user_token` and `assistant_token` ids we use are `195` and `196`, respectively, the same as Baichuan-13B-Chat.

## Evaluation
We compare our model with three general-purpose LLMs and two conversational Chinese medical-domain LLMs: GPT-3.5 and GPT-4 from OpenAI; Baichuan-13B-Chat, the aligned conversational version of our backbone Baichuan-13B-Base; and the open-source Chinese medical dialogue models HuatuoGPT-13B (trained from Ziya-LLaMA-13B) and BianQue-2.

We evaluated model performance from two angles: the ability to give accurate answers to single-turn QA questions, and the ability to conduct systematic inquiry and resolve consultation needs in multi-turn dialogue.

* For the single-turn evaluation, we built a benchmark of multiple-choice questions collected from two public medical datasets and measured answer accuracy.
* For the multi-turn evaluation, we first constructed a set of high-quality consultation cases, then had GPT-3.5 play the patient in each case and converse with the model playing the doctor. GPT-4 then rated each dialogue on **proactivity**, **accuracy**, **helpfulness**, and **linguistic quality**.

You can find the test data, the dialogues generated by each model, and the GPT-4 scores in the `eval/` directory.<br>

### Single-turn QA evaluation
We selected single-choice questions from [MLEC-QA](https://github.com/Judenpech/MLEC-QA) and from Western Medicine 306, the comprehensive Western medicine section of the Chinese postgraduate entrance examination. MLEC-QA contains questions from the China NMLEC, categorized into Clinic, Stomatology, Public Health, Traditional Chinese Medicine, and Integrated Traditional Chinese and Western Medicine; we selected 1,362 questions (10% of the test set) for evaluation. From Western Medicine 306, we used a combined 270 questions from 2020 and 2021. We evaluated both zero-shot and few-shot settings, drawing the few-shot examples from MLEC-QA's validation set and the 2019 Western Medicine 306 questions.
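The few-shot setup described above can be sketched as follows: exemplar questions with answers are prepended to the test question, and the model is asked to continue with the answer letter. The prompt template and data layout below are assumptions for illustration, not the project's actual evaluation code:

```python
# Hypothetical sketch of few-shot multiple-choice prompting. The template
# wording and question dicts are illustrative assumptions.

def format_question(q: dict, with_answer: bool) -> str:
    """Render a multiple-choice question; append the gold answer for exemplars."""
    options = "\n".join(f"{k}. {v}" for k, v in sorted(q["options"].items()))
    text = f"问题：{q['question']}\n{options}\n答案："
    return text + q["answer"] if with_answer else text

def build_few_shot_prompt(exemplars: list, test_q: dict) -> str:
    """Concatenate answered exemplars, then the unanswered test question."""
    shots = [format_question(e, with_answer=True) for e in exemplars]
    return "\n\n".join(shots + [format_question(test_q, with_answer=False)])

exemplar = {"question": "成人体温正常范围是？",
            "options": {"A": "35-36℃", "B": "36-37℃", "C": "38-39℃"},
            "answer": "B"}
test_q = {"question": "高血压的诊断标准是？",
          "options": {"A": "≥140/90 mmHg", "B": "≥120/80 mmHg", "C": "≥160/100 mmHg"},
          "answer": "A"}
prompt = build_few_shot_prompt([exemplar], test_q)
```

Accuracy is then the fraction of test questions where the model's continuation matches the gold answer letter.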
#### Few-shot

| Model | MLEC-QA Clinic | MLEC-QA CWM | MLEC-QA Public Health | MLEC-QA Stomatology | MLEC-QA TCM | Western Medicine 306 | Average |
|---|---|---|---|---|---|---|---|
| GPT-3.5 | 58.63 | 45.90 | 53.51 | 51.52 | 43.47 | 44.81 | 49.64 |
| Baichuan-13B-Chat | 31.25 | 37.69 | 28.65 | 27.27 | 29.77 | 24.81 | 29.91 |
| Huatuo (13B) | 31.85 | 25.00 | 32.43 | 32.95 | 26.54 | 24.44 | 28.87 |
| DISC-MedLLM | 44.64 | 41.42 | 41.62 | 38.26 | 39.48 | 33.33 | 39.79 |

#### Zero-shot

| Model | MLEC-QA Clinic | MLEC-QA CWM | MLEC-QA Public Health | MLEC-QA Stomatology | MLEC-QA TCM | Western Medicine 306 | Average |
|---|---|---|---|---|---|---|---|
| GPT-3.5 | 47.32 | 33.96 | 48.11 | 39.77 | 38.83 | 33.33 | 40.22 |
| Baichuan-13B-Chat | 44.05 | 43.28 | 39.92 | 31.06 | 41.42 | 32.22 | 38.66 |
| Huatuo (13B) | 27.38 | 21.64 | 25.95 | 25.76 | 24.92 | 20.37 | 24.34 |
| DISC-MedLLM | 44.64 | 37.31 | 35.68 | 34.85 | 41.75 | 31.11 | 37.56 |

GPT-3.5 clearly led the multiple-choice assessment, while our model achieved a strong second place in the few-shot setting. In the zero-shot setting it followed closely behind Baichuan-13B-Chat, taking third place. These results highlight the performance gap that conversational medical models still face on knowledge-intensive tests such as multiple-choice questions.
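The Average column in both tables is the unweighted mean of the six per-subject accuracies, which can be checked directly; here for the DISC-MedLLM few-shot row:

```python
# Reproduce the "Average" column as an unweighted mean over the six subjects
# (DISC-MedLLM, few-shot row of the table above).
scores = {
    "MLEC-QA Clinic": 44.64,
    "MLEC-QA CWM": 41.42,
    "MLEC-QA Public Health": 41.62,
    "MLEC-QA Stomatology": 38.26,
    "MLEC-QA TCM": 39.48,
    "Western Medicine 306": 33.33,
}
average = round(sum(scores.values()) / len(scores), 2)
# average == 39.79, matching the table
```

Note this is a macro-average over subjects, not weighted by the number of questions per subject.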
### Multi-turn dialogue evaluation
Our evaluation is based on three datasets: the Chinese Medical Benchmark ([CMB-Clin](https://github.com/FreedomIntelligence/CMB)), the Chinese Medical Dialogue Dataset ([CMD](https://github.com/UCSD-AI4H/Medical-Dialogue-System)), and the Chinese Medical Intent Dataset ([CMID](https://github.com/IMU-MachineLearningSXD/CMID)). CMB-Clin simulates a real-world consultation process, while CMD and CMID focus on evaluation from the perspectives of medical department specialization and user intent, respectively. <br>

Within this framework, dialogues are scored against four criteria:

1. Proactivity: when information is insufficient, the doctor proactively and clearly asks the patient for more details on symptoms, physical examination results, and medical history, actively guiding the patient through the consultation.
2. Accuracy: the diagnosis or advice the doctor provides is accurate, free of factual errors, and conclusions are not drawn arbitrarily.
3. Helpfulness: the doctor's responses give the patient clear, instructive, and practical assistance that specifically addresses the patient's concerns.
4. Linguistic quality: the conversation is logical; the doctor correctly understands the patient's meaning, and the expression is smooth and natural.
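As a minimal sketch of how per-dialogue scores on these four criteria might be collected from the GPT-4 judge, consider parsing a free-text judge reply. The reply format and the parsing below are assumptions for illustration, not the project's actual evaluation code:

```python
import re

# Hypothetical sketch: extract the four criterion scores from a GPT-4 judge
# reply. The reply format shown here is an assumption.

CRITERIA = ["Proactivity", "Accuracy", "Helpfulness", "Linguistic Quality"]

def parse_judge_reply(reply: str) -> dict:
    """Pull 'Criterion: score' pairs (scores on a 0-5 scale) out of free text."""
    result = {}
    for name in CRITERIA:
        m = re.search(rf"{re.escape(name)}\s*[:：]\s*([0-5](?:\.\d+)?)", reply)
        if m:
            result[name] = float(m.group(1))
    return result

reply = "Proactivity: 4, Accuracy: 5, Helpfulness: 4, Linguistic Quality: 5."
scores = parse_judge_reply(reply)
```

Per-model results such as the table below would then be averages of these per-dialogue scores.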
#### Results on CMB-Clin:
| **Model** | **Proactivity** | **Accuracy** | **Helpfulness** | **Linguistic quality** | **Average** |
|---|---|---|---|---|---|
| **GPT-3.5** | 4.30 | 4.53 | 4.55 | 5.00 | 4.60 |
| **GPT-4** | 4.15 | 4.70 | 4.75 | 4.96 | 4.64 |
| **Baichuan-13B-Chat** | 4.30 | 4.58 | 4.73 | 4.95 | 4.64 |
| **BianQue-2** | 3.97 | 4.36 | 4.37 | 4.81 | 4.38 |
| **Huatuo (13B)** | 4.40 | 4.62 | 4.74 | 4.96 | 4.68 |
| **DISC-MedLLM** | 4.64 | 4.47 | 4.66 | 4.99 | 4.69 |

#### Results on CMD
<img src="https://github.com/FudanDISC/DISC-MedLLM/blob/main/images/cmd.png" alt="cmd" width="75%"/>

#### Results on CMID
<img src="https://github.com/FudanDISC/DISC-MedLLM/blob/main/images/cmid.png" alt="cmid" width="75%"/>

## Acknowledgements
This project builds on the following open-source projects; we sincerely thank the related projects and their developers:

- [**MedDialog**](https://github.com/UCSD-AI4H/Medical-Dialogue-System)

- [**cMeKG**](https://github.com/king-yyf/CMeKG_tools)

- [**cMedQA**](https://github.com/zhangsheng93/cMedQA2)

- [**Baichuan-13B**](https://github.com/baichuan-inc/Baichuan-13B)

- [**FireFly**](https://github.com/yangjianxin1/Firefly)

We also thank the many other works that provided important help to this project but could not all be listed here.

## Disclaimer
Because of the inherent limitations of language models, we cannot guarantee the accuracy or reliability of the information generated by DISC-MedLLM. The model is designed for research and testing by individuals and academic groups only. We urge users to evaluate any information or medical advice in the model's outputs critically, and we strongly advise against trusting such results blindly. We assume no responsibility for any issues, risks, or adverse consequences arising from the use of this model.

## Citation
If our work helps your research, please cite us:
```bibtex
@misc{bao2023discmedllm,
      title={DISC-MedLLM: Bridging General Large Language Models and Real-World Medical Consultation},
      author={Zhijie Bao and Wei Chen and Shengze Xiao and Kuang Ren and Jiaao Wu and Cheng Zhong and Jiajie Peng and Xuanjing Huang and Zhongyu Wei},
      year={2023},
      eprint={2308.14346},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
## Star History

<picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=FudanDISC/DISC-MedLLM&type=Date&theme=dark" />
    <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=FudanDISC/DISC-MedLLM&type=Date" />
    <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=FudanDISC/DISC-MedLLM&type=Date" />
</picture>