Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
https://github.com/PKU-Alignment/safe-rlhf
- Host: GitHub
- URL: https://github.com/PKU-Alignment/safe-rlhf
- Owner: PKU-Alignment
- License: apache-2.0
- Created: 2023-05-15T11:47:08.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-06-13T10:06:45.000Z (6 months ago)
- Last Synced: 2024-10-29T17:12:22.566Z (about 2 months ago)
- Topics: ai-safety, alpaca, beaver, datasets, deepspeed, gpt, large-language-models, llama, llm, llms, reinforcement-learning, reinforcement-learning-from-human-feedback, rlhf, safe-reinforcement-learning, safe-reinforcement-learning-from-human-feedback, safe-rlhf, safety, transformer, transformers, vicuna
- Language: Python
- Homepage: https://pku-beaver.github.io
- Size: 3.99 MB
- Stars: 1,331
- Watchers: 18
- Forks: 118
- Open Issues: 13
Metadata Files:
- Readme: README.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
Awesome Lists containing this project
- StarryDivineSky - PKU-Alignment/safe-rlhf - A highly modular open-source RLHF framework developed by the PKU-Alignment team. It aims to provide training data and a reproducible code pipeline for alignment research, in particular constrained LLM alignment research via the Safe RLHF method. Features: supports SFT, RLHF, and Safe RLHF training for popular pre-trained models such as LLaMA, OPT, and Baichuan; provides a large human-labeled dataset (up to 1M pairs) covering both helpful and harmless preferences to support reproducible RLHF research; supports training reward models and cost models and provides pre-trained checkpoints; supports custom parameters and datasets for SFT and RLHF; provides multi-scale metrics for safety-constraint verification, e.g., BIG-bench and GPT-4 evaluation. (A01_Text Generation_Text Dialogue / Large Language Dialogue Models and Data)
README
Constrained Value-Aligned LLM via Safe RLHF
Beaver is a highly modular open-source RLHF framework developed by the PKU-Alignment team at Peking University.
It aims to provide training data and a reproducible code pipeline for alignment research, especially constrained LLM alignment research via Safe RLHF methods.

The key features of Beaver are:
- Support **SFT**, **RLHF** and **Safe RLHF** training for popular pre-trained models: [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai), [OPT](https://arxiv.org/abs/2205.01068), [Baichuan](https://huggingface.co/baichuan-inc/Baichuan-7B), etc.
- Provide a large human-labeled dataset **(up to 1M pairs)** including both helpful and harmless preferences to support reproducible RLHF research.
- Support training for Reward Model & Cost Model, and provide pre-trained checkpoints.
- Support customized parameters and datasets for SFT and RLHF.
- Provide multi-scale metrics for safety constraint verification, e.g., BIG-bench, GPT-4 Evaluation.

## **🦫 What's New?**
- **🎉 `2024/06/13`:** We are pleased to announce the open-sourcing of our PKU-SafeRLHF dataset version 1.0. This release advances over the initial beta version by incorporating human-AI joint annotations, expanding the scope of harm categories, and introducing detailed severity level labels. For further details and access, please visit our dataset page on 🤗 Hugging Face: [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF).
- **🎉 `2024/01/16`:** Our method [**Safe RLHF**](https://openreview.net/forum?id=TyFrPOKYXw) has been accepted by ICLR 2024 Spotlight.
- **📄 `2023/10/19`:** We've released our [**Safe RLHF paper**](https://arxiv.org/abs/2310.12773) on arXiv, detailing our new safe alignment algorithm and its implementation.
- **🚀 `2023/07/10`:** We're delighted to announce the open-sourcing of **Beaver-7B** [v1](https://huggingface.co/PKU-Alignment/beaver-7b-v1.0) / [v2](https://huggingface.co/PKU-Alignment/beaver-7b-v2.0) / [v3](https://huggingface.co/PKU-Alignment/beaver-7b-v3.0) models as the first milestone of the Safe RLHF training series, complemented by the corresponding **Reward Models** [v1](https://huggingface.co/PKU-Alignment/beaver-7b-v1.0-reward) / [v2](https://huggingface.co/PKU-Alignment/beaver-7b-v2.0-reward) / [v3](https://huggingface.co/PKU-Alignment/beaver-7b-v3.0-reward) / [unified](https://huggingface.co/PKU-Alignment/beaver-7b-unified-reward) and **Cost Models** [v1](https://huggingface.co/PKU-Alignment/beaver-7b-v1.0-cost) / [v2](https://huggingface.co/PKU-Alignment/beaver-7b-v2.0-cost) / [v3](https://huggingface.co/PKU-Alignment/beaver-7b-v3.0-cost) / [unified](https://huggingface.co/PKU-Alignment/beaver-7b-unified-cost) checkpoints on 🤗 Hugging Face.
- **🔥 `2023/07/10`:** We extend the open-source safety preference dataset, [**PKU-Alignment/PKU-SafeRLHF**](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF), which now contains over 300k examples. (See also section [PKU-SafeRLHF-Dataset](#pku-saferlhf-dataset))
- **⚙ `2023/07/05`:** We enhanced our support for Chinese pre-training models and incorporated additional open-source Chinese datasets. (See also sections [Chinese Support (中文支持)](#chinese-support-中文支持) and [Custom Datasets (自定义数据集)](#custom-datasets))
- **⭐️ `2023/05/15`:** First release of the Safe RLHF pipeline, evaluation results, and training code.

### Table of Contents
- [Constrained Value Alignment via Safe RLHF](#constrained-value-alignment-via-safe-rlhf)
- [Comparison with Other RLHF Libraries](#comparison-with-other-rlhf-libraries)
- [PKU-SafeRLHF-Dataset](#pku-saferlhf-dataset)
- [PKU-SafeRLHF-10K](#pku-saferlhf-10k)
- [PKU-SafeRLHF-1M](#pku-saferlhf-1m)
- [Why "Beaver"](#why-beaver)
- [Beaver vs. Alpaca](#beaver-vs-alpaca)
- [Installation](#installation)
- [Training](#training)
- [Custom Datasets](#custom-datasets)
- [Inference](#inference)
- [Interactive CLI Demo](#interactive-cli-demo)
- [Interactive Arena](#interactive-arena)
- [Chinese Support (中文支持)](#chinese-support-中文支持)
- [Benchmark and Evaluation](#benchmark-and-evaluation)
- [Arena via Reward and Cost Models](#arena-via-reward-and-cost-models)
- [BIG-bench](#big-bench)
- [GPT-4 Evaluation](#gpt-4-evaluation)
- [Future Plans](#future-plans)
- [Citation](#citation)
- [PKU-Alignment Team](#pku-alignment-team)
- [License](#license)

## Constrained Value Alignment via Safe RLHF
Reinforcement Learning from Human Feedback: reward maximization via preference learning
Safe Reinforcement Learning from Human Feedback: constrained reward maximization via preference learning
where $R(\cdot)$ and $C(\cdot)$ are the reward and cost functions, respectively. They are neural networks that serve as proxies for human judgment, trained on human preference data.
The ultimate goal is to find a model $\pi_{\theta}$ that is both helpful (high reward) and harmless (low cost).
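As a rough sketch of the two objectives described above (notation for illustration only; the precise constrained formulation, including any cost threshold, is given in the Safe RLHF paper):

```math
\begin{aligned}
\text{RLHF:}\quad & \max_{\theta}\; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_{\theta}(\cdot \mid x)} \left[ R(y, x) \right] \\
\text{Safe RLHF:}\quad & \max_{\theta}\; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_{\theta}(\cdot \mid x)} \left[ R(y, x) \right]
\quad \text{s.t.} \quad \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_{\theta}(\cdot \mid x)} \left[ C(y, x) \right] \le 0
\end{aligned}
```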
## Comparison with Other RLHF Libraries
Compared with other frameworks supporting RLHF, `safe-rlhf` is the first framework to support all stages from SFT to RLHF and evaluation.
In addition, `safe-rlhf` is the first framework that takes safety preferences into consideration during the RLHF stage.
It provides a stronger theoretical guarantee for constrained parameter search in the policy space.

| | SFT | Preference Model[1](#perference-model) Training | RLHF | Safe RLHF | PTX Loss | Evaluation | Backend |
| :----------------------------------------------------------------------------------------------------: | :--------------------------: | :--------------------------------------------------------: | :---: | :-------: | :------: | :--------: | :---------------: |
| [Beaver](https://github.com/PKU-Alignment/safe-rlhf) (Safe-RLHF) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | DeepSpeed |
| [trlX](https://github.com/CarperAI/trlx) | ✔️ | ❌[2](#trlx-rm-example) | ✔️ | ❌ | ❌ | ❌ | Accelerate / NeMo |
| [DeepSpeed-Chat](https://github.com/microsoft/DeepSpeedExamples/tree/HEAD/applications/DeepSpeed-Chat) | ✔️ | ✔️ | ✔️ | ❌ | ✔️ | ❌ | DeepSpeed |
| [Colossal-AI](https://github.com/hpcaitech/ColossalAI) | ✔️ | ✔️ | ✔️ | ❌ | ✔️ | ❌ | ColossalAI |
| [AlpacaFarm](https://github.com/tatsu-lab/alpaca_farm) | ❌[3](#alpaca-sft) | ✔️ | ✔️ | ❌ | ❌ | ✔️ | Accelerate |
1. In the context of RLHF, the "Preference Model" refers to the "Reward Model". In Safe RLHF, the "Preference Model" refers to both the "Reward Model" and the "Cost Model".
2. There is an example of reward model training in the examples directory of the trlX repository. However, it is not officially supported and is not integrated into the trlX library.
3. The supervised fine-tuning support for Alpaca is provided in the tatsu-lab/stanford_alpaca repository.

## PKU-SafeRLHF-Dataset
The `PKU-SafeRLHF` dataset is a human-labeled dataset containing both performance and safety preferences.
It includes constraints in over ten dimensions, such as insults, immorality, crime, emotional harm, and privacy, among others.
These constraints are designed for fine-grained value alignment in RLHF technology.

To facilitate multi-round fine-tuning, we will release the initial parameter weights, required datasets, and training parameters for each round.
This ensures reproducibility in scientific and academic research.
The dataset will be released gradually through rolling updates.

The dataset is available on Hugging Face: [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF).
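As a minimal sketch of how to inspect the released data (assuming the `datasets` library is installed and using the default configuration of the Hub dataset):

```python
# Minimal sketch: load the PKU-SafeRLHF preference dataset from the
# Hugging Face Hub and inspect its structure. Split and column names are
# read from the dataset itself rather than assumed here.
from datasets import load_dataset

dataset = load_dataset('PKU-Alignment/PKU-SafeRLHF')

# Print the available splits with their sizes and column names.
for split_name, split in dataset.items():
    print(f'{split_name}: {len(split)} examples, columns = {split.column_names}')

# Print one raw example to see the prompt/response/preference fields.
first_split = next(iter(dataset.values()))
print(first_split[0])
```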
### PKU-SafeRLHF-10K
`PKU-SafeRLHF-10K` is a subset of `PKU-SafeRLHF` that contains the first round of Safe RLHF training data with 10K instances, including safety preferences.
You can find it on Hugging Face: [PKU-Alignment/PKU-SafeRLHF-10K](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF-10K).

### PKU-SafeRLHF-1M
We will gradually release the full Safe-RLHF datasets, which include **1M _human-labeled_ pairs** for both helpful and harmless preferences.
## Why "Beaver"
Beaver is a large language model based on [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai), trained using `safe-rlhf`.
It is developed upon the foundation of the [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html) model, by collecting human preference data related to helpfulness and harmlessness and employing the Safe RLHF technique for training.
While maintaining the helpful performance of Alpaca, Beaver significantly improves its harmlessness.

> Beavers are known as the "natural dam engineers" as they are adept at using branches, shrubs, rocks, and soil to build dams and small wooden houses, creating wetland environments suitable for other creatures to inhabit, making them an indispensable part of the ecosystem. To ensure the safety and reliability of Large Language Models (LLMs) while accommodating a wide range of values across different populations, the Peking University team has named their open-source model "Beaver" and aims to build a dam for LLMs through the Constrained Value Alignment (CVA) technology. This technology enables fine-grained labeling of information and, combined with secure reinforcement learning methods, significantly reduces model bias and discrimination, thereby enhancing the model's safety. Analogous to the role of beavers in the ecosystem, the Beaver model will provide crucial support for the development of large language models and make positive contributions to the sustainable development of artificial intelligence technology.
## Beaver vs. Alpaca
Following the evaluation methodology of the [Vicuna](https://lmsys.org/blog/2023-03-30-vicuna) model, we utilized GPT-4 to evaluate Beaver. The results indicate that, compared to Alpaca, Beaver exhibits significant improvements in multiple dimensions related to safety.
![Arena-Demo](images/arena-demo.gif)
Significant distribution shift for safety preferences after utilizing the Safe RLHF pipeline on the Alpaca-7B model.
## Installation
Clone the source code from GitHub:
```bash
git clone https://github.com/PKU-Alignment/safe-rlhf.git
cd safe-rlhf
```

**Native Runner:** Set up a conda environment using [`conda`](https://github.com/conda/conda) / [`mamba`](https://github.com/mamba-org/mamba):
```bash
conda env create --file conda-recipe.yaml # or `mamba env create --file conda-recipe.yaml`
```

This will automatically set up all dependencies.

**Containerized Runner:** As an alternative to running on the native machine with conda isolation, you can also use Docker images to configure the environment.
First, follow the [NVIDIA Container Toolkit: Installation Guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) and [NVIDIA Docker: Installation Guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker) to set up `nvidia-docker`.
Then you can run:

```bash
make docker-run
```

This command will build and start a Docker container with the proper dependencies installed.
The host path `/` will be mapped to `/host` and the current working directory will be mapped to `/workspace` inside the container.

## Training
`safe-rlhf` supports a complete pipeline from Supervised Fine-Tuning (SFT) to preference model training to RLHF alignment training.
0. Follow the instructions in section [Installation](#installation) to set up the training environment properly.
```bash
conda activate safe-rlhf
export WANDB_API_KEY="..." # your W&B API key here
```

or
```bash
make docker-run
export WANDB_API_KEY="..." # your W&B API key here
```

1. Supervised Fine-Tuning (SFT)
```bash
bash scripts/sft.sh \
--model_name_or_path <your-model-name-or-checkpoint-path> \
--output_dir output/sft
```

NOTE: You may need to update some of the parameters in the script according to your machine setup, such as the number of GPUs for training, the training batch size, etc.
2. Value Models (reward model & cost model)
```bash
bash scripts/reward-model.sh \
--model_name_or_path output/sft \
--output_dir output/rm
```

```bash
bash scripts/cost-model.sh \
--model_name_or_path output/sft \
--output_dir output/cm
```

3. RLHF (Optional)
```bash
bash scripts/ppo.sh \
--actor_model_name_or_path output/sft \
--reward_model_name_or_path output/rm \
--output_dir output/ppo
```

4. Safe-RLHF
```bash
bash scripts/ppo-lag.sh \
--actor_model_name_or_path output/sft \
--reward_model_name_or_path output/rm \
--cost_model_name_or_path output/cm \
--output_dir output/ppo-lag
```

An example of commands to run the whole pipeline with [LLaMA-7B](https://ai.facebook.com/blog/large-language-model-llama-meta-ai):
```bash
conda activate safe-rlhf
bash scripts/sft.sh --model_name_or_path ~/models/llama-7b --output_dir output/sft
bash scripts/reward-model.sh --model_name_or_path output/sft --output_dir output/rm
bash scripts/cost-model.sh --model_name_or_path output/sft --output_dir output/cm
bash scripts/ppo-lag.sh \
--actor_model_name_or_path output/sft \
--reward_model_name_or_path output/rm \
--cost_model_name_or_path output/cm \
--output_dir output/ppo-lag
```

#### Computational Requirements
All training processes listed above are tested with [LLaMA-7B](https://ai.facebook.com/blog/large-language-model-llama-meta-ai) on a cloud server with 8 x NVIDIA A800-80GB GPUs.
Users who do not have enough GPU memory can enable [DeepSpeed ZeRO-Offload](https://www.deepspeed.ai/tutorials/zero-offload) to reduce peak GPU memory usage.
All training scripts accept an extra option `--offload` (defaults to `none`, i.e., ZeRO-Offload disabled) to offload the tensors (parameters and/or optimizer states) to the CPU. For example:
```bash
bash scripts/sft.sh \
--model_name_or_path ~/models/llama-7b \
--output_dir output/sft \
--offload all # or `parameter` or `optimizer`
```

For multi-node settings, users can refer to the [DeepSpeed: Resource Configuration (multi-node)](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node) documentation for more details. Here is an example to start the training process on 4 nodes (each with 8 GPUs):
```text
# myhostfile
worker-1 slots=8
worker-2 slots=8
worker-3 slots=8
worker-4 slots=8
```

Then launch the training scripts with:
```bash
bash scripts/sft.sh \
--hostfile myhostfile \
--model_name_or_path ~/models/llama-7b \
--output_dir output/sft
```

## Custom Datasets
`safe-rlhf` provides an abstraction to create datasets for all of the Supervised Fine-Tuning, preference model training, and RL training stages.
```python
class RawSample(TypedDict, total=False):
"""Raw sample type.For SupervisedDataset, should provide (input, answer) or (dialogue).
For PreferenceDataset, should provide (input, answer, other_answer, better).
For SafetyPreferenceDataset, should provide (input, answer, other_answer, safer, is_safe, is_other_safe).
For PromptOnlyDataset, should provide (input).
"""# Texts
input: NotRequired[str] # either `input` or `dialogue` should be provided
"""User input text."""
answer: NotRequired[str]
"""Assistant answer text."""
other_answer: NotRequired[str]
"""Other assistant answer text via resampling."""
dialogue: NotRequired[list[str]] # either `input` or `dialogue` should be provided
"""Dialogue history."""# Flags
better: NotRequired[bool]
"""Whether ``answer`` is better than ``other_answer``."""
safer: NotRequired[bool]
"""Whether ``answer`` is safer than ``other_answer``."""
is_safe: NotRequired[bool]
"""Whether ``answer`` is safe."""
is_other_safe: NotRequired[bool]
"""Whether ``other_answer`` is safe."""
```Here is an example to implement a custom dataset (see [safe_rlhf/datasets/raw](safe_rlhf/datasets/raw) for more examples):
```python
import argparse
from datasets import load_dataset
from safe_rlhf.datasets import RawDataset, RawSample, parse_dataset


class MyRawDataset(RawDataset):
    NAME = 'my-dataset-name'

    def __init__(self, path=None) -> None:
        # Load a dataset from Hugging Face
        self.data = load_dataset(path or 'my-organization/my-dataset')['train']

    def __getitem__(self, index: int) -> RawSample:
        data = self.data[index]
        # Construct a `RawSample` dictionary from your custom dataset item
        return RawSample(
            input=data['col1'],
            answer=data['col2'],
            other_answer=data['col3'],
            better=float(data['col4']) > float(data['col5']),
            ...
        )

    def __len__(self) -> int:
        return len(self.data)  # dataset size


def parse_arguments():
    parser = argparse.ArgumentParser(...)
    parser.add_argument(
        '--datasets',
        type=parse_dataset,
        nargs='+',
        metavar='DATASET[:PROPORTION[:PATH]]',
    )
    ...
    return parser.parse_args()


def main():
    args = parse_arguments()
    ...


if __name__ == '__main__':
    main()
```

Then you can pass this dataset to the training scripts as:
```bash
python3 train.py --datasets my-dataset-name
```

You may also pass multiple datasets, optionally with dataset proportions (separated by a colon `:`). For example:
```bash
python3 train.py --datasets alpaca:0.75 my-dataset-name:0.5
```

This will use a randomly sampled 75% of the Stanford Alpaca dataset and 50% of your custom dataset.
In addition, the dataset argument can also be followed by a local path (separated by a colon `:`) if you have already cloned the dataset repository from Hugging Face.
```bash
git lfs install
git clone https://huggingface.co/datasets/my-organization/my-dataset ~/path/to/my-dataset/repository
python3 train.py --datasets alpaca:0.75 my-dataset-name:0.5:~/path/to/my-dataset/repository
```

NOTE: The dataset class must be imported before the training script begins to parse the command line arguments.
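Going back to the `RawSample` fields above, a safety-preference sample (as consumed by a `SafetyPreferenceDataset`) might look like the following sketch; the texts are hypothetical and only illustrate which keys are expected:

```python
from safe_rlhf.datasets import RawSample

# Hypothetical safety-preference sample: `answer` is compared against
# `other_answer` for the same `input`, with per-answer safety flags.
sample = RawSample(
    input='How do I dispose of leftover paint?',
    answer='Take it to a local hazardous-waste collection site.',
    other_answer='Just pour it down the drain when nobody is watching.',
    safer=True,           # `answer` is safer than `other_answer`
    is_safe=True,         # `answer` is safe on its own
    is_other_safe=False,  # `other_answer` is not safe
)
```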
## Inference
### Interactive CLI Demo
```bash
python3 -m safe_rlhf.serve.cli --model_name_or_path output/sft # or output/ppo-lag
```

### Interactive Arena
```bash
python3 -m safe_rlhf.serve.arena --red_corner_model_name_or_path output/sft --blue_corner_model_name_or_path output/ppo-lag
```

![Arena-Demo](images/arena-demo.gif)
## Chinese Support (中文支持)
The Safe-RLHF pipeline supports not only the LLaMA model family but also other pre-trained models such as [Baichuan](https://huggingface.co/baichuan-inc/Baichuan-7B), [InternLM](https://huggingface.co/internlm/internlm-7b), etc. that offer better support for Chinese. You just need to update the path to the pre-trained model in the training and inference code.
> Safe-RLHF 管道不仅仅支持 LLaMA 系列模型,它也支持其他一些对中文支持更好的预训练模型,例如 [Baichuan](https://huggingface.co/baichuan-inc/Baichuan-7B) 和 [InternLM](https://huggingface.co/internlm/internlm-7b) 等。你只需要在训练和推理的代码中更新预训练模型的路径即可。
```bash
# SFT training
bash scripts/sft.sh --model_name_or_path baichuan-inc/Baichuan-7B --output_dir output/baichuan-sft

# Inference
python3 -m safe_rlhf.serve.cli --model_name_or_path output/baichuan-sft
```
In the meantime, we've added support for Chinese datasets such as the [Firefly](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M) and [MOSS series](https://huggingface.co/datasets/fnlp/moss-003-sft-data) to our [raw-datasets](safe_rlhf/datasets/raw). You only need to change the dataset path in the training code to use the corresponding dataset for fine-tuning a Chinese pre-trained model:
> 同时,我们也在 [raw-datasets](safe_rlhf/datasets/raw) 中增加了支持一些中文数据集,例如 [Firefly](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M) 和 [MOSS 系列](https://huggingface.co/datasets/fnlp/moss-003-sft-data)等。在训练代码中更改数据集路径,你就可以使用相应的数据集来微调中文预训练模型:
```diff
# scripts/sft.sh
- --train_datasets alpaca \
+ --train_datasets firefly \
```

For instructions on how to add custom datasets, please refer to section [Custom Datasets](#custom-datasets).
> 关于如何添加自定义数据集的方法,请参阅章节 [Custom Datasets (自定义数据集)](#custom-datasets)。
## Benchmark and Evaluation
### Arena via Reward and Cost Models
```bash
scripts/arena-evaluation.sh \
--red_corner_model_name_or_path output/sft \
--blue_corner_model_name_or_path output/ppo-lag \
--reward_model_name_or_path output/rm \
--cost_model_name_or_path output/cm \
--output_dir output/arena-evaluation
```

### BIG-bench
```bash
# Install BIG-bench
git clone https://github.com/google/BIG-bench.git
(
cd BIG-bench
python3 setup.py sdist
python3 -m pip install -e .
)

# BIG-bench evaluation
python3 -m safe_rlhf.evaluate.bigbench \
--model_name_or_path output/ppo-lag \
--task_name <task-name>
```

### GPT-4 Evaluation
```bash
# Install OpenAI Python API
pip3 install openai
export OPENAI_API_KEY="..."  # your OpenAI API key here

# GPT-4 evaluation
python3 -m safe_rlhf.evaluate.gpt4 \
--red_corner_model_name_or_path output/sft \
--blue_corner_model_name_or_path output/ppo-lag
```

## Future Plans
- [X] Beaver-7B checkpoint is released on Hugging Face.
- [X] Release Safe RLHF paper preprint.
- [ ] We will gradually release the full Safe-RLHF datasets.
- [ ] Train Larger LLM with Safe-RLHF.
- [ ] Support memory-efficient training, such as LoRA, PEFT, etc.

## Citation
If you find Safe-RLHF useful or use Safe-RLHF (model, code, dataset, etc.) in your research, please consider citing the following work in your publications.
```bibtex
@inproceedings{safe-rlhf,
title={Safe RLHF: Safe Reinforcement Learning from Human Feedback},
author={Josef Dai and Xuehai Pan and Ruiyang Sun and Jiaming Ji and Xinbo Xu and Mickel Liu and Yizhou Wang and Yaodong Yang},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=TyFrPOKYXw}
}
@inproceedings{beavertails,
title={BeaverTails: Towards Improved Safety Alignment of {LLM} via a Human-Preference Dataset},
author={Jiaming Ji and Mickel Liu and Juntao Dai and Xuehai Pan and Chi Zhang and Ce Bian and Boyuan Chen and Ruiyang Sun and Yizhou Wang and Yaodong Yang},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=g0QovXbFw3}
}
```

## PKU-Alignment Team
All students below contributed equally and the order is determined alphabetically:
- [Juntao Dai](https://github.com/calico-1226)
- [Jiaming Ji](https://github.com/zmsn-2077)
- [Xuehai Pan](https://github.com/XuehaiPan)
- [Ruiyang Sun](https://github.com/rockmagma02)

All advised by [Yizhou Wang](https://cfcs.pku.edu.cn/english/people/faculty/yizhouwang/index.htm) and [Yaodong Yang](https://www.yangyaodong.com/).
Acknowledgment: We appreciate [Ms. Yi Qu](https://www.xiaohongshu.com/user/profile/58ee23c96a6a695050dcf276) for designing the Beaver logo.

### Acknowledgment
This repository benefits from [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai), [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), [DeepSpeed](https://github.com/microsoft/DeepSpeed), and [DeepSpeed-Chat](https://github.com/microsoft/DeepSpeedExamples/tree/HEAD/applications/DeepSpeed-Chat).
Thanks for their wonderful works and their efforts for democratizing the LLM research.
Safe-RLHF and its related assets are built and open-sourced with love 🤗❤️.

This work is supported and funded by Peking University.
## License
Safe-RLHF is released under Apache License 2.0.