Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
List: Awesome-LM-SSP
https://github.com/ThuCCSLab/Awesome-LM-SSP
A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.).
- Host: GitHub
- URL: https://github.com/ThuCCSLab/Awesome-LM-SSP
- Owner: ThuCCSLab
- License: apache-2.0
- Created: 2024-01-09T04:17:50.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2024-10-24T08:35:25.000Z (16 days ago)
- Last Synced: 2024-10-25T00:16:27.455Z (15 days ago)
- Topics: adversarial-attacks, awesome-list, diffusion-models, jailbreak, language-model, llm, nlp, privacy, safety, security, vlm
- Homepage: https://github.com/ThuCCSLab/Awesome-LM-SSP
- Size: 2.14 MB
- Stars: 871
- Watchers: 23
- Forks: 54
- Open Issues: 2
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
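The figures above are a snapshot taken at sync time. The same fields are exposed live by the public GitHub REST API; below is a minimal Python sketch (assuming the third-party `requests` package; `fetch_repo_metadata` is an illustrative helper, not something provided by this repository or by ecosyste.ms):

```python
import requests  # third-party HTTP client: pip install requests


def fetch_repo_metadata(owner: str = "ThuCCSLab", repo: str = "Awesome-LM-SSP") -> dict:
    """Return a subset of repository metadata from the public GitHub REST API."""
    url = f"https://api.github.com/repos/{owner}/{repo}"
    resp = requests.get(url, headers={"Accept": "application/vnd.github+json"}, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return {
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "open_issues": data["open_issues_count"],
        "default_branch": data["default_branch"],
        "last_pushed": data["pushed_at"],
        "license": (data.get("license") or {}).get("spdx_id"),
        "topics": data.get("topics", []),
    }


if __name__ == "__main__":
    # Values will drift from the snapshot above as the repository evolves.
    print(fetch_repo_metadata())
```

Unauthenticated requests work but are rate-limited; supplying a personal access token via the `Authorization` header raises the limit.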
Awesome Lists containing this project
- Awesome-Jailbreak-on-LLMs - Awesome-LM-SSP
- Awesome-LLM-Safety - link
- awesome_LLM-harmful-fine-tuning-papers - Awesome LLM-SSP
README
# Awesome-LM-SSP
[![Awesome](https://awesome.re/badge.svg)](https://awesome.re)
![Page Views](https://badges.toozhao.com/badges/01HMRJE3211AJ2QD2X9AKTQG67/blue.svg)
![Stars](https://img.shields.io/github/stars/ThuCCSLab/Awesome-LM-SSP)
## Introduction
This repo collects resources on the trustworthiness of large models (LMs) across multiple dimensions (e.g., safety, security, and privacy), with a special focus on multi-modal LMs (e.g., vision-language models and diffusion models).

- This repo is in progress :seedling: (currently manually collected).
- Badges:
  - Model:
    - ![LLM](https://img.shields.io/badge/LLM_(Large_Language_Model)-589cf4)
    - ![VLM](https://img.shields.io/badge/VLM_(Vision_Language_Model)-c7688b)
    - ![SLM](https://img.shields.io/badge/SLM_(Speech_Language_Model)-39c5bb)
    - ![Diffusion](https://img.shields.io/badge/Diffusion-a99cf4)
  - Comment: ![Benchmark](https://img.shields.io/badge/Benchmark-87b800) ![New_dataset](https://img.shields.io/badge/New_dataset-87b800) ![Agent](https://img.shields.io/badge/Agent-87b800) ![CodeGen](https://img.shields.io/badge/CodeGen-87b800) ![Defense](https://img.shields.io/badge/Defense-87b800) ![RAG](https://img.shields.io/badge/RAG-87b800) ![Chinese](https://img.shields.io/badge/Chinese-87b800) ...
  - Venue: ![conference](https://img.shields.io/badge/conference-f1b800) ![blog](https://img.shields.io/badge/blog-f1b800) ![OpenAI](https://img.shields.io/badge/OpenAI-f1b800) ![Meta AI](https://img.shields.io/badge/Meta_AI-f1b800) ...
- :sunflower: You are welcome to recommend resources to us via Issues using the following format (**please fill in this table**):
| Title | Link | Code | Venue | Classification | Model | Comment |
| ---- |---- |---- |---- |---- |----|----|
| aa | arxiv | github | bb'23 | A1. Jailbreak | LLM | Agent |

## News
- [2024.08.17] We collected `34` related papers from [ACL'24](https://2024.aclweb.org/)!
- [2024.05.13] We collected `7` related papers from [S&P'24](https://www.computer.org/csdl/proceedings/sp/2024/1RjE8VKKk1y)!
- [2024.04.27] We adjusted the categories.
- [2024.01.20] We collected `3` related papers from [NDSS'24](https://www.ndss-symposium.org/ndss2024/accepted-papers/)!
- [2024.01.17] We collected `108` related papers from [ICLR'24](https://openreview.net/group?id=ICLR.cc/2024/Conference)!
- [2024.01.09] 🚀 LM-SSP is released!

## Collections
- [Book](collection/book.md) (2)
- [Competition](collection/competition.md) (5)
- [Leaderboard](collection/leaderboard.md) (3)
- [Toolkit](collection/toolkit.md) (9)
- [Survey](collection/survey.md) (32)
- Paper (1191)
  - A. Safety (670)
    - [A0. General](collection/paper/safety/general.md) (15)
    - [A1. Jailbreak](collection/paper/safety/jailbreak.md) (258)
    - [A2. Alignment](collection/paper/safety/alignment.md) (73)
    - [A3. Deepfake](collection/paper/safety/deepfake.md) (54)
    - [A4. Ethics](collection/paper/safety/ethics.md) (5)
    - [A5. Fairness](collection/paper/safety/fairness.md) (54)
    - [A6. Hallucination](collection/paper/safety/hallucination.md) (108)
    - [A7. Prompt Injection](collection/paper/safety/prompt_injection.md) (37)
    - [A8. Toxicity](collection/paper/safety/toxicity.md) (66)
  - B. Security (181)
    - [B0. General](collection/paper/security/general.md) (6)
    - [B1. Adversarial Examples](collection/paper/security/adversarial_examples.md) (79)
    - [B2. Poison & Backdoor](collection/paper/security/poison_&_backdoor.md) (86)
    - [B3. System](collection/paper/security/system.md) (10)
  - C. Privacy (340)
    - [C0. General](collection/paper/privacy/general.md) (24)
    - [C1. Contamination](collection/paper/privacy/contamination.md) (13)
    - [C2. Copyright](collection/paper/privacy/copyright.md) (115)
    - [C3. Data Reconstruction](collection/paper/privacy/data_reconstruction.md) (39)
    - [C4. Membership Inference Attacks](collection/paper/privacy/membership_inference_attacks.md) (31)
    - [C5. Model Extraction](collection/paper/privacy/model_extraction.md) (10)
    - [C6. Privacy-Preserving Computation](collection/paper/privacy/privacy-preserving_computation.md) (60)
    - [C7. Property Inference Attacks](collection/paper/privacy/property_inference_attacks.md) (3)
    - [C8. Unlearning](collection/paper/privacy/unlearning.md) (45)

## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=ThuCCSLab/Awesome-LM-SSP&type=Date)](https://star-history.com/#ThuCCSLab/Awesome-LM-SSP&Date)
## Acknowledgement
- Organizers: [Tianshuo Cong (丛天硕)](https://tianshuocong.github.io/), [Xinlei He (何新磊)](https://xinleihe.github.io/), [Zhengyu Zhao (赵正宇)](https://zhengyuzhao.github.io/), [Yugeng Liu (刘禹更)](https://liu.ai/), [Delong Ran (冉德龙)](https://github.com/eggry)
- This project is inspired by [LLM Security](https://llmsecurity.net/), [Awesome LLM Security](https://github.com/corca-ai/awesome-llm-security), [LLM Security & Privacy](https://github.com/chawins/llm-sp), [UR2-LLMs](https://github.com/jxzhangjhu/Awesome-LLM-Uncertainty-Reliability-Robustness), [PLMpapers](https://github.com/thunlp/PLMpapers), [EvaluationPapers4ChatGPT](https://github.com/THU-KEG/EvaluationPapers4ChatGPT)