https://github.com/instadeepai/DebateLLM
Benchmarking Multi-Agent Debate between Language Models for Truthfulness in Q&A.
- Host: GitHub
- URL: https://github.com/instadeepai/DebateLLM
- Owner: instadeepai
- License: apache-2.0
- Created: 2024-03-13T15:32:13.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2024-05-22T13:17:19.000Z (8 months ago)
- Last Synced: 2024-05-22T13:58:00.207Z (8 months ago)
- Language: Jupyter Notebook
- Homepage:
- Size: 3.04 MB
- Stars: 3
- Watchers: 3
- Forks: 2
- Open Issues: 1
Metadata Files:
- Readme: README.md
- Contributing: docs/CONTRIBUTING.md
- License: LICENSE
Awesome Lists containing this project
- awesome_ai_agents - Debatellm - Benchmarking Multi-Agent Debate between Language Models for Truthfulness in Q&A. (Building / Benchmarks)
README
# 💬 DebateLLM - debating LLMs for truth discovery in medicine and beyond
## 👀 Overview
DebateLLM is a library that encompasses a variety of debating protocols and prompting strategies, aimed at enhancing the accuracy of Large Language Models (LLMs) on Q&A datasets.
Our [research](https://arxiv.org/abs/2311.17371) (mostly using GPT-3.5) reveals that no single debate or prompting strategy consistently outperforms the others across all scenarios. It is therefore important to experiment with various approaches to find what works best for each dataset. However, implementing each protocol can be time-consuming, so we built and open-sourced DebateLLM to facilitate its use by the research community. It enables researchers to test various implementations from the literature on their specific problems (medical or otherwise), potentially driving further advancements in the intelligent prompting of LLMs.
We have various system implementations:
- Society of Minds
- Medprompt
- Multi-Persona
- Ensemble Refinement
- ChatEval
- Solo Performance Prompting
## 🔧 Installation
To set up the DebateLLM environment, execute the following command:
```bash
make build_venv
```
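If you prefer not to use the Makefile target, a roughly equivalent manual setup is sketched below. This assumes a standard `requirements.txt` at the repository root, which is an assumption rather than something stated in the README.

```bash
# Manual alternative to `make build_venv` (assumes a requirements.txt at the repo root)
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```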
## 🚀 Running an Experiment

To run an experiment:
1. Activate the Python virtual environment:
```bash
source venv/bin/activate
```
2. Execute the evaluation script:
```bash
python ./experiments/evaluate.py
```

You can modify experiment parameters using the Hydra configs located in the `conf` folder. The main configuration file is `conf/config.yaml`; changes at the dataset and system level can be made by updating the configs in `conf/dataset` and `conf/system`.
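Because these are standard Hydra configs, parameters can also be overridden directly on the command line. The sketch below assumes hypothetical option names (`medqa`, `multi_persona`); check `conf/dataset` and `conf/system` for the actual config names.

```bash
# Hydra-style overrides: select a dataset config and a debate system config
# (the option names are placeholders, not confirmed names from this repo)
python ./experiments/evaluate.py dataset=medqa system=multi_persona
```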
To launch multiple experiments:
```bash
python ./scripts/launch_experiments.py
```
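As an alternative to the launch script, Hydra's built-in `--multirun` flag can sweep over config groups from the command line. This is a sketch under the same assumption of hypothetical option names; `launch_experiments.py` above is the repository's own entry point for batched runs.

```bash
# Hypothetical Hydra multirun sweep over datasets and systems
python ./experiments/evaluate.py --multirun dataset=medqa,pubmedqa system=multi_persona,society_of_minds
```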
## 📊 Visualising Results

To visualize the results with Neptune:
1. Run the visualisation script:
```bash
python ./scripts/visualise_results.py
```
2. The output results will be saved to `./data/charts/`.
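Note that the Neptune client is typically configured through environment variables before the visualisation script is run; a minimal sketch, where the project name is a placeholder:

```bash
# Standard Neptune client configuration via environment variables
# (replace the placeholders with your own API token and workspace/project)
export NEPTUNE_API_TOKEN="<your-neptune-api-token>"
export NEPTUNE_PROJECT="<workspace>/<project>"
```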
## 📊 Benchmarks

Our benchmarks showcase DebateLLM's performance on the MedQA, PubMedQA, and MMLU datasets, focusing on accuracy versus cost, time efficiency, token economy, and the impact of agent agreement. All experiments use GPT-3.5 unless specified otherwise. These visualizations illustrate the balance between accuracy and computational cost, the speed and quality of responses, linguistic efficiency, and the effects of consensus strategies in medical Q&A contexts. Each dataset highlights the varied capabilities of DebateLLM's strategies.
### MedQA Dataset
### PubMedQA Dataset
### MMLU Dataset
### Agent Agreement Analysis
Modulating the agreement intensity provides a substantial improvement in performance for various models. For Multi-Persona, there is an approximate 15% improvement, and for Society of Minds (SoM), an approximate 5% improvement on the USMLE dataset. The 90% agreement intensity prompts applied to Multi-Persona demonstrate a new high score on the MedQA dataset, highlighted in the MedQA dataset cost plot as a red cross.
The benchmarks indicate the effectiveness of various strategies and models implemented within DebateLLM. For detailed analysis and discussion, refer to our [paper](https://arxiv.org/abs/2311.17371).
### GPT-4 Results
We also assessed GPT-4's capability on the MedQA dataset, applying the optimal agreement modulation value identified for Multi-Persona with GPT-3.5 on USMLE. The results, shown below, suggest that these hyperparameter settings transfer effectively to more advanced models:
## Contributing 🤝
Please read our [contributing docs](docs/CONTRIBUTING.md) for details on how to submit pull requests, our Contributor License Agreement, and community guidelines.

## 📚 Citing DebateLLM
If you use DebateLLM in your work, please cite our paper:
```bibtex
@article{smit2024mad,
title={Should we be going MAD? A Look at Multi-Agent Debate Strategies for LLMs},
author={Smit, Andries and Duckworth, Paul and Grinsztajn, Nathan and Barrett, Thomas D. and Pretorius, Arnu},
journal={arXiv preprint arXiv:2311.17371},
year={2024},
url={https://arxiv.org/abs/2311.17371}
}
```
Link to the paper: [Benchmarking Multi-Agent Debate between Language Models for Medical Q&A](https://arxiv.org/abs/2311.17371).