awesome-hallucination-detection
List of papers on hallucination detection in LLMs.
https://github.com/EdinburghNLP/awesome-hallucination-detection
Overviews, Surveys, and Shared Tasks
- A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions
- SemEval-2024 Task-6 - SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes
- llm-hallucination-survey
- Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models
- Mitigating LLM Hallucinations: a multifaceted approach
- LLM Powered Autonomous Agents
- How Do Large Language Models Capture the Ever-changing World Knowledge? A Review of Recent Advances
Measuring Hallucinations in LLMs
- Vectara LLM Hallucination Leaderboard
- TofuEval: Evaluating Hallucinations of LLMs on Topic-Focused Dialogue Summarization
- AnyScale - Llama 2 is about as factually accurate as GPT-4 for summaries and is 30X cheaper
- Arthur.ai - Hallucination Experiment
- Vectara - Cut the Bull…. Detecting Hallucinations in Large Language Models
- [Do Androids Know They're Only Dreaming of Electric Sheep?](https://arxiv.org/abs/2312.17249)
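As a minimal illustration of the NLI-style scoring that leaderboards and experiments like the ones above build on (a sketch, not any particular vendor's method; the model choice `roberta-large-mnli` and the 0.5 threshold are assumptions for illustration):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch: score whether a generated summary is entailed by its source using
# an off-the-shelf NLI model; low entailment probability is treated as a
# hallucination signal. Model choice and threshold are illustrative.
MODEL = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

source = "The Eiffel Tower, completed in 1889, stands in Paris."
summary = "The Eiffel Tower was completed in 1925."

inputs = tokenizer(source, summary, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]

# roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
entailment = probs[2].item()
print(f"entailment={entailment:.3f}",
      "-> likely hallucinated" if entailment < 0.5 else "-> supported")
```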
Open Source Models for Measuring Hallucinations
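One widely used open-source scorer in this space is Vectara's hallucination evaluation model (HHEM), the model behind the leaderboard listed above. A hedged sketch of its use, assuming the `CrossEncoder` interface documented for the original `vectara/hallucination_evaluation_model` release on Hugging Face (newer versions of the model may expose a different loading API):

```python
from sentence_transformers import CrossEncoder

# Sketch, assuming the CrossEncoder interface from the original HHEM
# model card; newer releases may require a different loading API.
model = CrossEncoder("vectara/hallucination_evaluation_model")

pairs = [
    ["The plane landed safely in Denver.",   # source / premise
     "The plane landed in Denver."],         # faithful hypothesis
    ["The plane landed safely in Denver.",
     "The plane crashed near Denver."],      # hallucinated hypothesis
]

# Scores are factual-consistency probabilities in [0, 1];
# higher means the hypothesis is better supported by the source.
scores = model.predict(pairs)
print(scores)
```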
Definitions and Notes
- Extrinsic and Intrinsic Hallucinations: in the standard usage of the NLG literature, an intrinsic hallucination is generated output that directly contradicts the source or input, while an extrinsic hallucination is generated output that cannot be verified against the source at all (the source neither supports nor refutes it). For example, a summary that misstates a date given in the article is intrinsic; a summary that adds a claim the article never mentions is extrinsic.
Papers and Summaries
- [Retrieval-Based Prompt Selection for Code-Related Few-Shot Learning](https://people.ece.ubc.ca/amesbah/resources/papers/cedar-icse23.pdf)
- [Correction with Backtracking Reduces Hallucination in Summarization](https://arxiv.org/abs/2310.16176)
Taxonomies
- Internal Consistency and Self-Feedback in Large Language Models: A Survey - proposes the Self-Feedback framework.
- The Dawn After the Dark: An Empirical Study on Factuality Hallucination in Large Language Models - Entity-error Hallucination, Relation-error Hallucination, Incompleteness Hallucination, Outdatedness Hallucination, Overclaim Hallucination, Unverifiability Hallucination.
- A Survey of Hallucination in “Large” Foundation Models - covers hallucination in general-purpose and domain-specific LLMs.