Projects in Awesome Lists by LLM-Tuning-Safety
A curated list of projects in awesome lists by LLM-Tuning-Safety.
https://github.com/LLM-Tuning-Safety/LLMs-Finetuning-Safety
We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20 via OpenAI’s APIs.
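For context, the attack relies on nothing more exotic than OpenAI's public fine-tuning workflow. The sketch below shows that workflow using the official `openai` Python SDK (v1.x); the file path `train.jsonl` is a placeholder, and the repository's actual 10 adversarial training examples are not reproduced here.

```python
# Minimal sketch of the OpenAI fine-tuning workflow (openai Python SDK v1.x).
# Assumes an API key in the OPENAI_API_KEY environment variable.
# "train.jsonl" is a placeholder; the project's adversarial data is not shown.
from openai import OpenAI

client = OpenAI()

# 1. Upload a small JSONL training file in the chat fine-tuning format:
#    one {"messages": [...]} record per line.
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Launch a fine-tuning job on GPT-3.5 Turbo. With only ~10 examples,
#    the job is very cheap, which is the point the project demonstrates.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```

Once the job finishes, the resulting fine-tuned model ID can be passed as the `model` argument to the standard chat completions endpoint.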
Last synced: 02 Dec 2024