# Ecosyste.ms: Awesome

An open API service indexing awesome lists of open source software.

## Awesome-LLM-Uncertainty-Reliability-Robustness

Awesome-LLM-Robustness: a curated list of papers on uncertainty, reliability, and robustness in large language models.

https://github.com/jxzhangjhu/Awesome-LLM-Uncertainty-Reliability-Robustness

Last synced: 5 days ago
## Reliability

### Hallucination

### Truthfulness

### Prompt tuning, optimization and design
*(16 paper entries; link targets were truncated during sync. Recoverable repository fragments: example-selection, chen/instructzero, NLP-Chang/PromptBoosting, Box-Prompt-Learning, Box-Tuning, science/auto-cot, selective-annotation. See the source repository for the full citations.)*
### Instruction and RLHF

### Fine-tuning

### Reasoning

### Tools and external APIs
## Introductory Posts

*(71 link entries; link targets were not captured in this sync. See the source repository for the full list.)*

## Tutorial
## Uncertainty

### Uncertainty Estimation

### Calibration

### Ambiguity

### Confidence

### Active Learning
*(18 paper entries; link targets were truncated during sync. Recoverable repository fragments: learning-pretrained-models, uoregon/famie, active-learning, institute/al_nlp_feasible, acl22-revisiting-uncertainty-based-query-strategies-for-active-learning-with-transformers. See the source repository for the full citations.)*
## Robustness

### Invariance

### Distribution Shift

### Out-of-Distribution

### Adaptation and Generalization

### Adversarial

### Attribution

### Causality

### Bias and Fairness
*(7 entries; link targets were truncated during sync. Recoverable fragment: language-models-are-biased-can-logic-help-save-them-0303. Researcher pages listed: Owain Evans, Sebastian Farquhar, Elias Stengel-Eskin.)*

## Technical Reports

## Evaluation & Survey
*(16 paper entries; most link targets were truncated during sync. Recoverable repository fragments: ConvAI/tree/main/WideDeep, eval-survey, secure/DecodingTrust, crfm/helm, benchmark/NL-Augmenter, gym/robustness-gym. Intact links: [DecodingTrust website](https://decodingtrust.github.io/), [HELM blog](https://crfm.stanford.edu/2022/11/17/helm.html).)*
## Categories

### Sub Categories

- Uncertainty Estimation (26)
- Hallucination (23)
- Active Learning (18)
- Reasoning (17)
- Prompt tuning, optimization and design (16)
- Calibration (14)
- Ambiguity (9)
- Instruction and RLHF (7)
- Bias and Fairness (7)
- Confidence (7)
- Truthfulness (6)
- Fine-tuning (5)
- Tools and external APIs (5)
- Adversarial (4)
- Causality (3)
- Invariance (2)
- Adaptation and Generalization (1)
- Distribution Shift (1)
- Out-of-Distribution (1)
- Attribution (1)
### Keywords

- hallucinations (2)
- nlp (2)
- chatgpt (2)
- openai (2)
- machine-learning (2)
- prompt (2)
- prompt-learning (2)
- prompt-toolkit (2)
- llms (1)
- gpt-4 (1)
- openai-api (1)
- ai (1)
- bert (1)
- pre-trained-language-models (1)
- prompt-based (1)
- chatgpt-api (1)
- deep-learning (1)
- few-shot-learning (1)
- gpt (1)
- gpt-3 (1)
- prompt-based-learning (1)
- prompt-engineering (1)
- prompt-generator (1)
- prompt-tuning (1)
- promptengineering (1)
- text-to-image (1)
- text-to-speech (1)
- text-to-video (1)
- generative-ai (1)
- llm (1)
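Since ecosyste.ms exposes this index as an API with a JSON representation, the category and keyword counts above can be consumed programmatically. A minimal sketch, assuming a hypothetical payload shape (the real API schema may differ; the field names `name`, `sub_categories`, and `count` here are illustrative only):

```python
import json

# Hypothetical JSON representation of the index above; the real
# ecosyste.ms payload shape is an assumption for illustration.
sample = json.loads("""
{
  "name": "Awesome-LLM-Uncertainty-Reliability-Robustness",
  "sub_categories": [
    {"name": "Uncertainty Estimation", "count": 26},
    {"name": "Hallucination", "count": 23},
    {"name": "Attribution", "count": 1}
  ]
}
""")

# Keep sub-categories with at least 20 entries, largest first.
top = sorted(
    (c for c in sample["sub_categories"] if c["count"] >= 20),
    key=lambda c: c["count"],
    reverse=True,
)
for c in top:
    print(f'{c["name"]}: {c["count"]}')
```

In practice the payload would come from an HTTP request to the service rather than an inline string; the filtering and sorting logic stays the same.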