Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/HowieHwong/TrustLLM
[ICML 2024] TrustLLM: Trustworthiness in Large Language Models
- Host: GitHub
- URL: https://github.com/HowieHwong/TrustLLM
- Owner: HowieHwong
- License: mit
- Created: 2023-12-23T18:32:57.000Z (12 months ago)
- Default Branch: main
- Last Pushed: 2024-09-29T02:57:52.000Z (3 months ago)
- Last Synced: 2024-11-09T03:39:16.728Z (about 1 month ago)
- Topics: ai, benchmark, dataset, evaluation, large-language-models, llm, natural-language-processing, nlp, pypi-package, toolkit, trustworthy-ai, trustworthy-machine-learning
- Language: Python
- Homepage: https://trustllmbenchmark.github.io/TrustLLM-Website/
- Size: 11.2 MB
- Stars: 465
- Watchers: 8
- Forks: 44
- Open Issues: 7
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- StarryDivineSky - HowieHwong/TrustLLM
- awesome-production-machine-learning - TrustLLM - TrustLLM is a comprehensive framework to evaluate the trustworthiness of large language models, which includes principles, surveys, and benchmarks. (Evaluation and Monitoring)
README
# Toolkit for "**TrustLLM: Trustworthiness in Large Language Models**"
[![Website](https://img.shields.io/badge/Website-%F0%9F%8C%8D-blue?style=for-the-badge&logoWidth=40)](https://trustllmbenchmark.github.io/TrustLLM-Website/)
[![Paper](https://img.shields.io/badge/Paper-%F0%9F%8E%93-lightgrey?style=for-the-badge&logoWidth=40)](https://arxiv.org/abs/2401.05561)
[![Dataset](https://img.shields.io/badge/Dataset-%F0%9F%92%BE-green?style=for-the-badge&logoWidth=40)](https://huggingface.co/datasets/TrustLLM/TrustLLM-dataset)
[![Data Map](https://img.shields.io/badge/Data%20Map-%F0%9F%8D%9F-orange?style=for-the-badge&logoWidth=40)](https://atlas.nomic.ai/map/f64e87d3-c769-4a90-b15d-9dc833acc8ba/8e9d7045-503b-4ba0-bc64-7201cb7aacee?xs=-16.14086&xf=-1.88776&ys=-7.54937&yf=3.88213)
[![Leaderboard](https://img.shields.io/badge/Leaderboard-%F0%9F%9A%80-brightgreen?style=for-the-badge&logoWidth=40)](https://trustllmbenchmark.github.io/TrustLLM-Website/leaderboard.html)
[![Toolkit Document](https://img.shields.io/badge/Toolkit%20Document-%F0%9F%93%9A-blueviolet?style=for-the-badge&logoWidth=40)](https://howiehwong.github.io/TrustLLM/)
[![Downloads](https://static.pepy.tech/badge/trustllm)](https://pepy.tech/project/trustllm)
[![Downloads](https://static.pepy.tech/badge/trustllm/month)](https://pepy.tech/project/trustllm)
[![Downloads](https://static.pepy.tech/badge/trustllm/week)](https://pepy.tech/project/trustllm)
## Updates & News
- [01/09/2024] **TrustLLM** toolkit has been downloaded more than 4,000 times!
- [15/07/2024] **TrustLLM** now supports [**UniGen**](https://unigen-framework.github.io/) for dynamic evaluation.
- [02/05/2024] 🔥 **TrustLLM has been accepted by ICML 2024! See you in Vienna!**
- [23/04/2024] :star: Version 0.3.0: Major updates, including bug fixes, enhanced evaluation, and newly added models (ChatGLM3, Llama3-8b, Llama3-70b, GLM4, Mixtral). ([See details](https://howiehwong.github.io/TrustLLM/changelog.html))
- [20/03/2024] :star: Version 0.2.4: Fixed many bugs and added support for the Gemini Pro API
- [01/02/2024] :page_facing_up: Version 0.2.2: See our new paper about awareness in LLMs! ([link](https://arxiv.org/abs/2401.17882))
- [29/01/2024] :star: Version 0.2.1: The trustllm toolkit now supports (1) an easy evaluation pipeline, (2) LLMs hosted on [Replicate](https://replicate.com/) and [DeepInfra](https://deepinfra.com/), and (3) the [Azure OpenAI API](https://azure.microsoft.com/en-us/products/ai-services/openai-service)
- [20/01/2024] :star: Version 0.2.0 of trustllm toolkit is released! See the [new features](https://howiehwong.github.io/TrustLLM/changelog.html#version-020).
- [12/01/2024] :surfer: The [dataset](https://huggingface.co/datasets/TrustLLM/TrustLLM-dataset), [leaderboard](https://trustllmbenchmark.github.io/TrustLLM-Website/leaderboard.html), and [evaluation toolkit](https://howiehwong.github.io/TrustLLM/) are released!

## **TL;DR**
- TrustLLM (ICML 2024) is a comprehensive framework for studying the trustworthiness of large language models; it includes principles, surveys, and benchmarks.
- This code repository provides an easy-to-use toolkit for evaluating the trustworthiness of LLMs ([see our docs](https://howiehwong.github.io/TrustLLM/)).

**Table of Contents**
- [About TrustLLM](#-about-trustllm)
- [Before Evaluation](#before-evaluation)
- [Installation](#installation)
- [Dataset Download](#dataset-download)
- [Generation](#generation)
- [Evaluation](#evaluation)
- [Dataset & Task](#dataset--task)
- [Dataset overview:](#dataset-overview)
- [Task overview:](#task-overview)
- [Leaderboard](#leaderboard)
- [Contribution](#contribution)
- [Citation](#citation)
- [License](#license)

## **About TrustLLM**
We introduce TrustLLM, a comprehensive study of trustworthiness in LLMs. It covers principles for different dimensions of trustworthiness, an established benchmark, an evaluation and analysis of trustworthiness for mainstream LLMs, and a discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight dimensions. Based on these principles, we establish a benchmark across six of them: truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, using over 30 datasets.
The [document](https://howiehwong.github.io/TrustLLM/#about) explains how to use the trustllm Python package to quickly assess the trustworthiness of your LLM. For more details about TrustLLM, please refer to the [project website](https://trustllmbenchmark.github.io/TrustLLM-Website/).
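The sections below walk through installation, dataset download, generation, and evaluation. As a quick orientation, here is a condensed sketch that chains the same entry points shown later in this README; all paths and the model name are placeholders, and the sketch assumes the remaining `LLMGeneration` parameters keep the defaults shown in the Generation example below:

```python
# Condensed workflow sketch; each step is detailed in the sections below.
from trustllm.dataset_download import download_dataset
from trustllm.generation.generation import LLMGeneration
from trustllm.task.pipeline import run_truthfulness

download_dataset(save_path='dataset')      # 1. fetch the benchmark data

llm_gen = LLMGeneration(                   # 2. generate model responses
    model_path="your model name",
    test_type="test section",
    data_path="dataset",
)
llm_gen.generation_results()

results = run_truthfulness(                # 3. score one dimension
    internal_path="path_to_internal_consistency_data.json",
    external_path="path_to_external_consistency_data.json",
    hallucination_path="path_to_hallucination_data.json",
    sycophancy_path="path_to_sycophancy_data.json",
    advfact_path="path_to_advfact_data.json",
)
```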
## 🧹 **Before Evaluation**
### **Installation**
**Installation via GitHub (recommended):**

Clone the repository:

```shell
git clone git@github.com:HowieHwong/TrustLLM.git
```

Create a new environment:

```shell
conda create --name trustllm python=3.9
```

Install the required packages:

```shell
cd TrustLLM/trustllm_pkg
pip install .
```

**Installation via `pip`:**

```shell
pip install trustllm
```

**Installation via `conda`:**

```shell
conda install -c conda-forge trustllm
```
### **Dataset Download**

Download the TrustLLM dataset:
```python
from trustllm.dataset_download import download_dataset

download_dataset(save_path='save_path')
```
### **Generation**

The generation module has been part of the toolkit since [version 0.2.0](https://howiehwong.github.io/TrustLLM/changelog.html). Start your generation from [this page](https://howiehwong.github.io/TrustLLM/guides/generation_details.html). Here is an example:
```python
from trustllm.generation.generation import LLMGeneration

llm_gen = LLMGeneration(
model_path="your model name",
test_type="test section",
data_path="your dataset file path",
model_name="",
online_model=False,
use_deepinfra=False,
use_replicate=False,
repetition_penalty=1.0,
num_gpus=1,
max_new_tokens=512,
debug=False,
device='cuda:0'
)

llm_gen.generation_results()
```
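The constructor above also exposes flags for hosted models (`online_model`, `use_deepinfra`, `use_replicate`). A minimal sketch of routing generation through a hosted endpoint, assuming the remaining parameters keep the defaults shown above and that the model name is one the provider actually serves (see the generation guide for accepted names):

```python
from trustllm.generation.generation import LLMGeneration

llm_gen = LLMGeneration(
    model_path="your model name",       # must match a model the provider serves
    test_type="test section",
    data_path="your dataset file path",
    online_model=True,                  # route requests to a hosted API
    use_deepinfra=True,                 # flag shown in the example above
    max_new_tokens=512,
)
llm_gen.generation_results()
```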
## **Evaluation**

We provide a toolkit that lets you conveniently assess the trustworthiness of large language models. Please refer to [the document](https://howiehwong.github.io/TrustLLM/) for more details. Here is an example:
```python
from trustllm.task.pipeline import run_truthfulness

truthfulness_results = run_truthfulness(
internal_path="path_to_internal_consistency_data.json",
external_path="path_to_external_consistency_data.json",
hallucination_path="path_to_hallucination_data.json",
sycophancy_path="path_to_sycophancy_data.json",
advfact_path="path_to_advfact_data.json"
)
```
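`run_truthfulness` returns the computed results, and the toolkit document describes analogous pipeline entry points for the other dimensions. A small sketch of persisting the returned results for later comparison, assuming the return value is JSON-serializable (e.g., a dict of metric scores):

```python
import json

# truthfulness_results comes from the pipeline call above
with open("truthfulness_results.json", "w") as f:
    json.dump(truthfulness_results, f, indent=2)
```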
## **Dataset & Task**

### **Dataset overview:**
*✓ means the dataset is from prior work, and ✗ means the dataset is first proposed in our benchmark.*
| Dataset | Description | Num. | Exist? | Section |
|-----------------------|-----------------------------------------------------------------------------------------------------------------------|----------|--------|------------------------|
| SQuAD2.0 | It combines questions in SQuAD1.1 with over 50,000 unanswerable questions. | 100 | ✓ | Misinformation |
| CODAH | It contains 28,000 commonsense questions. | 100 | ✓ | Misinformation |
| HotpotQA | It contains 113k Wikipedia-based question-answer pairs for complex multi-hop reasoning. | 100 | ✓ | Misinformation |
| AdversarialQA | It contains 30,000 adversarial reading comprehension question-answer pairs. | 100 | ✓ | Misinformation |
| Climate-FEVER | It contains 7,675 climate change-related claims manually curated by human fact-checkers. | 100 | ✓ | Misinformation |
| SciFact | It contains 1,400 expert-written scientific claims paired with evidence abstracts. | 100 | ✓ | Misinformation |
| COVID-Fact | It contains 4,086 real-world COVID claims. | 100 | ✓ | Misinformation |
| HealthVer | It contains 14,330 health-related claims verified against scientific articles. | 100 | ✓ | Misinformation |
| TruthfulQA | Multiple-choice questions that evaluate whether a language model is truthful in generating answers. | 352 | ✓ | Hallucination |
| HaluEval | It contains 35,000 generated and human-annotated hallucinated samples. | 300 | ✓ | Hallucination |
| LM-exp-sycophancy | A dataset consisting of human questions, each with one sycophantic and one non-sycophantic response example. | 179 | ✓ | Sycophancy |
| Opinion pairs | It contains 120 pairs of opposite opinions. | 240, 120 | ✗ | Sycophancy, Preference |
| WinoBias | It contains 3,160 sentences, split for development and testing, created by researchers familiar with the project. | 734 | ✓ | Stereotype |
| StereoSet | It contains sentences that measure model preferences across gender, race, religion, and profession. | 734 | ✓ | Stereotype |
| Adult | The dataset, containing attributes like sex, race, age, education, work hours, and work type, is used to predict salary levels for individuals. | 810 | ✓ | Disparagement |
| Jailbreak Trigger | The dataset contains prompts based on 13 jailbreak attacks. | 1300 | ✗ | Jailbreak, Toxicity |
| Misuse (additional) | This dataset contains prompts crafted to assess how LLMs react when confronted by attackers or malicious users seeking to exploit the model for harmful purposes. | 261 | ✗ | Misuse |
| Do-Not-Answer | It is curated and filtered to consist only of prompts to which responsible LLMs do not answer. | 344 + 95 | ✓ | Misuse, Stereotype |
| AdvGLUE | A multi-task dataset with different adversarial attacks. | 912 | ✓ | Natural Noise |
| AdvInstruction | 600 instructions generated by 11 perturbation methods. | 600 | ✗ | Natural Noise |
| ToolE | A dataset of user queries that may trigger LLMs to use external tools. | 241 | ✓ | Out of Domain (OOD) |
| Flipkart | A product review dataset, collected starting from December 2022. | 400 | ✓ | Out of Domain (OOD) |
| DDXPlus | A 2022 medical diagnosis dataset comprising synthetic data representing about 1.3 million patient cases. | 100 | ✓ | Out of Domain (OOD) |
| ETHICS | It contains numerous descriptions of morally relevant scenarios and their moral correctness. | 500 | ✓ | Implicit Ethics |
| Social Chemistry 101 | It contains various social norms, each consisting of an action and its label. | 500 | ✓ | Implicit Ethics |
| MoralChoice | It consists of different contexts with morally correct and wrong actions. | 668 | ✓ | Explicit Ethics |
| ConfAIde | It contains descriptions of how information is used. | 196 | ✓ | Privacy Awareness |
| Privacy Awareness | It includes different privacy information queries about various scenarios. | 280 | ✗ | Privacy Awareness |
| Enron Email | It contains approximately 500,000 emails generated by employees of the Enron Corporation. | 400 | ✓ | Privacy Leakage |
| Xstest | A test suite for identifying exaggerated safety behaviors in LLMs. | 200 | ✓ | Exaggerated Safety |

### **Task overview:**
*○ means evaluation through automatic scripts (e.g., keyword matching), ● means automatic evaluation by ChatGPT, GPT-4, or Longformer, and ◐ means mixed evaluation.*
*More trustworthy LLMs are expected to have a higher value of the metrics marked with ↑ and a lower value of those marked with ↓.*
| Task Name | Metrics | Type | Eval | Section |
|----------------------------------------------|-------------------------------------------|-----------------|------|--------------------------|
| Closed-book QA | Accuracy (↑) | Generation | ○ | Misinformation (Internal) |
| Fact-Checking | Macro F-1 (↑) | Classification | ● | Misinformation (External) |
| Multiple Choice QA | Accuracy (↑) | Classification | ○ | Hallucination |
| Hallucination Classification | Accuracy (↑) | Classification | ○ | Hallucination |
| Persona Sycophancy | Embedding similarity (↑) | Generation | ◐ | Sycophancy |
| Opinion Sycophancy | Percentage change (↓) | Generation | ● | Sycophancy |
| Factuality Correction | Percentage change (↑) | Generation | ● | Adversarial Factuality |
| Jailbreak Attack Evaluation | RtA (↑) | Generation | ● | Jailbreak |
| Toxicity Measurement | Toxicity Value (↓) | Generation | ○ | Toxicity |
| Misuse Evaluation | RtA (↑) | Generation | ● | Misuse |
| Exaggerated Safety Evaluation | RtA (↓) | Generation | ● | Exaggerated Safety |
| Agreement on Stereotypes | Accuracy (↑) | Generation | ◐ | Stereotype |
| Recognition of Stereotypes | Agreement Percentage (↓) | Classification | ◐ | Stereotype |
| Stereotype Query Test | RtA (↑) | Generation | ● | Stereotype |
| Preference Selection | RtA (↑) | Generation | ● | Preference |
| Salary Prediction | p-value (↑) | Generation | ○ | Disparagement |
| Adversarial Perturbation in Downstream Tasks | ASR (↓), RS (↑) | Generation | ◐ | Natural Noise |
| Adversarial Perturbation in Open-Ended Tasks | Embedding similarity (↑) | Generation | ◐ | Natural Noise |
| OOD Detection | RtA (↑) | Generation | ● | Out of Domain (OOD) |
| OOD Generalization | Micro F1 (↑) | Classification | ○ | Out of Domain (OOD) |
| Agreement on Privacy Information | Pearson's correlation (↑) | Classification | ○ | Privacy Awareness |
| Privacy Scenario Test | RtA (↑) | Generation | ● | Privacy Awareness |
| Probing Privacy Information Usage | RtA (↑), Accuracy (↓) | Generation | ◐ | Privacy Leakage |
| Moral Action Judgement | Accuracy (↑) | Classification | ◐ | Implicit Ethics |
| Moral Reaction Selection (Low-Ambiguity) | Accuracy (↑) | Classification | ◐ | Explicit Ethics |
| Moral Reaction Selection (High-Ambiguity) | RtA (↑) | Generation | ● | Explicit Ethics |
| Emotion Classification | Accuracy (↑) | Classification | ○ | Emotional Awareness |

## **Leaderboard**
If you want to view the performance of all models or upload the performance of your LLM, please refer to [this link](https://trustllmbenchmark.github.io/TrustLLM-Website/leaderboard.html).
![ranking](images/rank_card_00.png "ranking")
## 📣 **Contribution**
We welcome your contributions, including but not limited to the following:
- New evaluation datasets
- Research on trustworthy issues
- Improvements to the toolkit

If you intend to improve the toolkit, please fork the repository first, make your modifications, and then open a `pull request`; a sketch of that flow follows.
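A sketch of the contribution flow with standard git commands (replace `<your-username>` and the branch name as appropriate):

```shell
# Fork https://github.com/HowieHwong/TrustLLM on GitHub first, then:
git clone git@github.com:<your-username>/TrustLLM.git
cd TrustLLM
git checkout -b my-improvement
# ...make your changes...
git add -A
git commit -m "Describe your change"
git push origin my-improvement
# Finally, open a pull request against HowieHwong/TrustLLM:main on GitHub.
```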
## **⏰ TODO in Coming Versions**
- [x] Faster and simpler evaluation pipeline (**Version 0.2.1**)
- [x] Dynamic dataset ([UniGen](https://unigen-framework.github.io/))
- [ ] More fine-grained datasets
- [ ] Chinese output evaluation
- [ ] Downstream application evaluation

## **Citation**
```text
@inproceedings{huang2024trustllm,
title={TrustLLM: Trustworthiness in Large Language Models},
author={Yue Huang and Lichao Sun and Haoran Wang and Siyuan Wu and Qihui Zhang and Yuan Li and Chujie Gao and Yixin Huang and Wenhan Lyu and Yixuan Zhang and Xiner Li and Hanchi Sun and Zhengliang Liu and Yixin Liu and Yijue Wang and Zhikun Zhang and Bertie Vidgen and Bhavya Kailkhura and Caiming Xiong and Chaowei Xiao and Chunyuan Li and Eric P. Xing and Furong Huang and Hao Liu and Heng Ji and Hongyi Wang and Huan Zhang and Huaxiu Yao and Manolis Kellis and Marinka Zitnik and Meng Jiang and Mohit Bansal and James Zou and Jian Pei and Jian Liu and Jianfeng Gao and Jiawei Han and Jieyu Zhao and Jiliang Tang and Jindong Wang and Joaquin Vanschoren and John Mitchell and Kai Shu and Kaidi Xu and Kai-Wei Chang and Lifang He and Lifu Huang and Michael Backes and Neil Zhenqiang Gong and Philip S. Yu and Pin-Yu Chen and Quanquan Gu and Ran Xu and Rex Ying and Shuiwang Ji and Suman Jana and Tianlong Chen and Tianming Liu and Tianyi Zhou and William Yang Wang and Xiang Li and Xiangliang Zhang and Xiao Wang and Xing Xie and Xun Chen and Xuyu Wang and Yan Liu and Yanfang Ye and Yinzhi Cao and Yong Chen and Yue Zhao},
booktitle={Forty-first International Conference on Machine Learning},
year={2024},
url={https://openreview.net/forum?id=bWUU0LwwMp}
}
```

## **License**
The code in this repository is open source under the [MIT license](https://github.com/HowieHwong/TrustLLM/blob/main/LICENSE).