<div align="center">

<img src="figs/easydetect.jpg" width="18%" height="18%">

**An Easy-to-Use Multimodal Hallucination Detection Framework for MLLMs**

---
$\color{red}{It\'s\ unfortunate\ that\ due\ to\ limited\ computational\ resources,\ we\ have\ suspended\ the\ online\ demo.}$
$\color{red}{If\ you\'d\ like\ to\ try\ the\ demo,\ please\ contact\ sunnywcx\@zju.edu.cn\ or\ zhangningyu\@zju.edu.cn}$

<p align="center">
  <a href="#acknowledgement">🌻Acknowledgement</a> •
  <a href="https://huggingface.co/datasets/openkg/MHaluBench">🤗Benchmark</a> •
  <a href="http://easydetect.zjukg.cn/">🍎Demo</a> •
  <a href="#overview">🌟Overview</a> •
  <a href="#modelzoo">🐧ModelZoo</a> •
  <a href="#installation">🔧Installation</a> •
  <a href="#quickstart">⏩Quickstart</a> •
  <a href="#citation">🚩Citation</a>
  <!-- <a href="#contributors">🎉Contributors</a> -->
</p>

![](https://img.shields.io/badge/version-v0.1.1-blue)
![](https://img.shields.io/github/last-commit/zjunlp/EasyDetect?color=green)
![](https://img.shields.io/badge/PRs-Welcome-red)
<!-- [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT) -->

</div>

## Table of Contents

- <a href="#acknowledgement">🌻Acknowledgement</a>
- <a href="#overview">🌟Overview</a>
  - <a href="#unified-multimodal-hallucination">Unified Multimodal Hallucination</a>
  - <a href="#dataset-mhallubench-statistic">Dataset: MHalluBench Statistic</a>
  - <a href="#framework-unihd-illustration">Framework: UniHD Illustration</a>
- <a href="#modelzoo">🐧ModelZoo</a>
- <a href="#installation">🔧Installation</a>
- <a href="#quickstart">⏩Quickstart</a>
- <a href="#citation">🚩Citation</a>

---

## 🔔News

- **2024-05-17 The paper [Unified Hallucination Detection for Multimodal Large Language Models](https://arxiv.org/abs/2402.03190) was accepted to the ACL 2024 main conference.**
- **2024-04-21 We replaced all the base models in the demo with our own trained models, significantly reducing the inference time.**
- **2024-04-21 We released our open-source hallucination detection model HalDet-LLaVA, which can be downloaded from HuggingFace, ModelScope, and WiseModel.**
- **2024-02-10 We released the EasyDetect [demo](http://easydetect.zjukg.cn/).**
- **2024-02-05 We released the paper "[Unified Hallucination Detection for Multimodal Large Language Models](https://arxiv.org/abs/2402.03190)" with a new benchmark, [MHaluBench](https://huggingface.co/datasets/openkg/MHaluBench)! We look forward to any comments or discussions on this topic :)**
- **2023-10-20 The EasyDetect project was launched and is under development.**


## 🌻Acknowledgement

Parts of this project's implementation were assisted and inspired by related hallucination toolkits, including [FactTool](https://github.com/GAIR-NLP/factool), [Woodpecker](https://github.com/BradyFU/Woodpecker), and others.
This repository also benefits from the public projects [mPLUG-Owl](https://github.com/X-PLUG/mPLUG-Owl), [MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4), [LLaVA](https://github.com/haotian-liu/LLaVA), [GroundingDINO](https://github.com/IDEA-Research/GroundingDINO), and [MAERec](https://github.com/Mountchicken/Union14M).
We follow the same license for open-sourcing and thank them for their contributions to the community.


## 🌟Overview

EasyDetect is a systematic package proposed as an easy-to-use hallucination detection framework for Multimodal Large Language Models (MLLMs) such as GPT-4V, Gemini, and LLaVA in your research experiments.

### Unified Multimodal Hallucination

#### Unified View of Detection

A prerequisite for unified detection is a coherent categorization of the principal categories of hallucinations within MLLMs. Our paper examines the following hallucination taxonomy from a unified perspective:

<p align="center">
<img src="figs/view.png" width="60%" height="60%">
<img src="figs/intro.png" width="60%" height="60%">
</p>

**Figure 1:** Unified multimodal hallucination detection aims to identify and detect modality-conflicting hallucinations at various levels such as object, attribute, and scene text, as well as fact-conflicting hallucinations in both image-to-text and text-to-image generation.

**Modality-Conflicting Hallucination.** MLLMs sometimes generate outputs that conflict with inputs from other modalities, leading to issues such as incorrect objects, attributes, or scene text. The example in Figure 1(a) shows an MLLM inaccurately describing an athlete's uniform, an attribute-level conflict caused by MLLMs' limited ability to achieve fine-grained text-image alignment.

**Fact-Conflicting Hallucination.** Outputs from MLLMs may contradict established factual knowledge. Image-to-text models can generate narratives that stray from the actual content by incorporating irrelevant facts, while text-to-image models may produce visuals that fail to reflect the factual knowledge contained in text prompts. These discrepancies underline the struggle of MLLMs to maintain factual consistency, representing a significant challenge in the domain.

#### Fine-grained Detection Task Definition

Unified detection of multimodal hallucination requires checking each image-text pair `a = {v, x}`, where `v` denotes either the visual input provided to an MLLM or the visual output synthesized by it. Correspondingly, `x` signifies the MLLM's generated textual response based on `v`, or the textual user query for synthesizing `v`. Within this task, each `x` may contain multiple claims, denoted as $\{c_i\}_{i=1}^{n}$. The objective for hallucination detectors is to assess each claim in `a` and determine whether it is "hallucinatory" or "non-hallucinatory", providing a rationale for the judgment based on the provided definition of hallucination. Text hallucination detection for LLMs is a sub-case of this setting in which `v` is null.
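To make the task definition concrete, here is a minimal sketch of how a claim-level detection record could be represented in Python. The class and field names (`Claim`, `ImageTextPair`, `label`, `rationale`) are illustrative assumptions for this README and do not mirror the exact schema used by EasyDetect or MHaluBench.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical claim-level schema for unified hallucination detection;
# field names are illustrative, not the exact MHaluBench/EasyDetect format.

@dataclass
class Claim:
    text: str                        # one claim c_i extracted from x
    label: Optional[str] = None      # "hallucinatory" or "non-hallucinatory"
    rationale: Optional[str] = None  # justification for the judgment

@dataclass
class ImageTextPair:
    """One pair a = {v, x}: v is the visual input/output, x is the text."""
    image_path: Optional[str]        # v; None reduces to text-only LLM detection
    text: str                        # x: generated response or user query
    task: str                        # "image-to-text" or "text-to-image"
    claims: List[Claim] = field(default_factory=list)

# Example: an image-to-text pair whose response decomposes into two claims.
pair = ImageTextPair(
    image_path="examples/058214af21a03013.jpg",
    text='The cafe in the image is named "Hauptbahnhof"',
    task="image-to-text",
    claims=[Claim(text="There is a cafe in the image"),
            Claim(text='The cafe is named "Hauptbahnhof"')],
)
```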
### Dataset: MHalluBench Statistic

To advance this research trajectory, we introduce the meta-evaluation benchmark MHaluBench, which encompasses content from image-to-text and text-to-image generation, aiming to rigorously assess the advancements in multimodal hallucination detectors. Further statistical details about MHaluBench are provided in the figures below.

<img src="figs/datasetinfo.jpg">

**Table 1:** *A comparison of benchmarks with respect to existing fact-checking or hallucination evaluation.* "Check." indicates verifying factual consistency, "Eval." denotes evaluating hallucinations in responses generated by different LLMs under test, and "Det." embodies the evaluation of a detector's capability to identify hallucinations.

<p align="center">
  <img src="figs/饼图.png" width="40%" height="40%">
</p>

**Figure 2:** *Claim-level data statistics of MHaluBench.* "IC" signifies Image Captioning and "T2I" indicates Text-to-Image synthesis, respectively.

<p align="center">
<img src="figs/条形图.png" width="50%" height="50%">
</p>

**Figure 3:** *Distribution of hallucination categories within hallucination-labeled claims of MHaluBench.*

### Framework: UniHD Illustration

Addressing the key challenges in hallucination detection, we introduce a unified framework (Figure 4) that systematically tackles multimodal hallucination identification for both image-to-text and text-to-image tasks. Our framework capitalizes on the domain-specific strengths of various tools to efficiently gather multimodal evidence for confirming hallucinations.

<img src="figs/framework.png">

**Figure 4:** *The specific illustration of UniHD for unified multimodal hallucination detection.*
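Reading Figure 4 at a high level, UniHD proceeds in stages: claim extraction, query generation, tool-based evidence collection (object detection, scene-text OCR, web search), and verification. The sketch below outlines that flow under this reading; every helper function is a hypothetical stub, not EasyDetect's actual API (the real entry point is `pipeline.run_pipeline.Pipeline`, shown in the Quickstart).

```python
from typing import List, Optional

# Illustrative outline of the UniHD stages sketched in Figure 4. Every helper
# below is a stub standing in for the prompts and tools configured later
# (claim_generate / query_generate / verify prompts, GroundingDINO, OCR, Serper);
# it is NOT EasyDetect's actual API.

def extract_claims(text: str) -> List[str]:
    # Stage 1: decompose the response (or prompt) into atomic claims c_1..c_n.
    return [text]

def generate_queries(claim: str, task: str) -> dict:
    # Stage 2: decide what evidence each claim needs (objects, attributes,
    # scene text, external facts).
    return {"claim": claim, "task": task, "needs": ["object", "scene-text", "fact"]}

def gather_evidence(query: dict, image_path: Optional[str]) -> dict:
    # Stage 3: call the configured tools, e.g. object detection on the image,
    # scene-text recognition, and web search for factual snippets.
    return {"query": query, "snippets": []}

def verify(claim: str, evidence: dict) -> dict:
    # Stage 4: judge the claim against the pooled evidence and give a rationale.
    return {"claim": claim, "label": "non-hallucinatory", "rationale": "stub"}

def unihd_detect(text: str, image_path: Optional[str], task: str) -> List[dict]:
    claims = extract_claims(text)
    queries = [generate_queries(c, task) for c in claims]
    evidence = [gather_evidence(q, image_path) for q in queries]
    return [verify(c, e) for c, e in zip(claims, evidence)]
```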
---

## 🐧ModelZoo

You can download two versions of HalDet-LLaVA, 7B and 13B, from three platforms: HuggingFace, ModelScope, and WiseModel.

| HuggingFace | ModelScope | WiseModel |
| ----------- | ---------- | --------- |
| [HalDet-llava-7b](https://huggingface.co/zjunlp/HalDet-llava-7b) | [HalDet-llava-7b](https://www.modelscope.cn/models/ZJUNLP/HalDet-llava-7b) | [HalDet-llava-7b](https://www.wisemodel.cn/models/zjunlp/HalDet-llava-7b) |
| [HalDet-llava-13b](https://huggingface.co/zjunlp/HalDet-llava-13b) | [HalDet-llava-13b](https://www.modelscope.cn/models/ZJUNLP/HalDet-llava-13b) | [HalDet-llava-13b](https://www.wisemodel.cn/models/zjunlp/HalDet-llava-13b) |

The claim-level results on the validation dataset:

- Self-Check (GPT-4V): GPT-4V prompted with 0 or 2 in-context cases
- UniHD (GPT-4V/GPT-4o): GPT-4V/GPT-4o with 2-shot prompting and tool information
- HalDet (LLaVA): LLaVA-v1.5 trained on our training datasets

| task type | model | Acc | Prec avg | Recall avg | Mac.F1 |
| --------- | ----- | --- | -------- | ---------- | ------ |
| image-to-text | Self-Check 0shot (GPT-4V) | 75.09 | 74.94 | 75.19 | 74.97 |
| image-to-text | Self-Check 2shot (GPT-4V) | 79.25 | 79.02 | 79.16 | 79.08 |
| image-to-text | HalDet (LLaVA-7b) | 75.02 | 75.05 | 74.18 | 74.38 |
| image-to-text | HalDet (LLaVA-13b) | 78.16 | 78.18 | 77.48 | 77.69 |
| image-to-text | UniHD (GPT-4V) | 81.91 | 81.81 | 81.52 | 81.63 |
| image-to-text | UniHD (GPT-4o) | 86.08 | 85.89 | 86.07 | 85.96 |
| text-to-image | Self-Check 0shot (GPT-4V) | 76.20 | 79.31 | 75.99 | 75.45 |
| text-to-image | Self-Check 2shot (GPT-4V) | 80.76 | 81.16 | 80.69 | 80.67 |
| text-to-image | HalDet (LLaVA-7b) | 67.35 | 69.31 | 67.50 | 66.62 |
| text-to-image | HalDet (LLaVA-13b) | 74.74 | 76.68 | 74.88 | 74.34 |
| text-to-image | UniHD (GPT-4V) | 85.82 | 85.83 | 85.83 | 85.82 |
| text-to-image | UniHD (GPT-4o) | 89.29 | 89.28 | 89.28 | 89.28 |

For more detailed information about HalDet-LLaVA and the training dataset, please refer to the [readme](https://github.com/zjunlp/EasyDetect/blob/main/HalDet-LLaVA/README.md).
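If you only need the HalDet-LLaVA checkpoint files locally, one option (an assumption about your tooling, not a step required by EasyDetect) is to pull them from the Hugging Face Hub:

```python
# Optional sketch: download the HalDet-LLaVA 7B weights from the Hugging Face Hub.
# Assumes `pip install huggingface_hub`; the repo id comes from the table above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="zjunlp/HalDet-llava-7b")
print("HalDet-LLaVA 7B downloaded to:", local_dir)
```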
## 🔧Installation

**Installation for local development:**
```bash
conda create -n easydetect python=3.9.19
git clone https://github.com/zjunlp/EasyDetect.git
cd EasyDetect
pip install -r requirements.txt
```

**Installation for tools (GroundingDINO and MAERec):**
```bash
# install GroundingDINO
git clone https://github.com/IDEA-Research/GroundingDINO.git
cp -r GroundingDINO pipeline/GroundingDINO
cd pipeline/GroundingDINO/
pip install -e .
cd ..

# install MAERec
git clone https://github.com/Mountchicken/Union14M.git
cp -r Union14M/mmocr-dev-1.x pipeline/mmocr
cd pipeline/mmocr/
pip install -U openmim
mim install mmengine
mim install mmcv
mim install mmdet
pip install timm
pip install -r requirements/albu.txt
pip install -r requirements.txt
pip install -v -e .
cd ..

mkdir weights
cd weights
wget -q https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth
wget https://download.openmmlab.com/mmocr/textdet/dbnetpp/dbnetpp_resnet50-oclip_fpnc_1200e_icdar2015/dbnetpp_resnet50-oclip_fpnc_1200e_icdar2015_20221101_124139-4ecb39ac.pth -O dbnetpp.pth
wget https://github.com/Mountchicken/Union14M/releases/download/Checkpoint/maerec_b_union14m.pth -O maerec_b.pth
cd ..
```

---

## ⏩Quickstart

We provide example code to help users quickly get started with EasyDetect.

#### Step 1: Write a configuration file in YAML format

Users can easily configure the parameters of EasyDetect in a YAML file, or simply use the default parameters in the configuration file we provide. The path of the configuration file is EasyDetect/pipeline/config/config.yaml.

```yaml
openai:
  api_key: Input your openai api key
  base_url: Input base_url, default is None
  temperature: 0.2
  max_tokens: 1024
tool:
  detect:
    groundingdino_config: the path of GroundingDINO_SwinT_OGC.py
    model_path: the path of groundingdino_swint_ogc.pth
    device: cuda:0
    BOX_TRESHOLD: 0.35
    TEXT_TRESHOLD: 0.25
    AREA_THRESHOLD: 0.001
  ocr:
    dbnetpp_config: the path of dbnetpp_resnet50-oclip_fpnc_1200e_icdar2015.py
    dbnetpp_path: the path of dbnetpp.pth
    maerec_config: the path of maerec_b_union14m.py
    maerec_path: the path of maerec_b.pth
    device: cuda:0
    content: word.number
    cachefiles_path: the path of cache_files to save temp images
    BOX_TRESHOLD: 0.2
    TEXT_TRESHOLD: 0.25
  google_serper:
    serper_api_key: Input your serper api key
    snippet_cnt: 10
prompts:
  claim_generate: pipeline/prompts/claim_generate.yaml
  query_generate: pipeline/prompts/query_generate.yaml
  verify: pipeline/prompts/verify.yaml
```

#### Step 2: Run with the example code

Example code:
```python
from pipeline.run_pipeline import *

pipeline = Pipeline()
text = "The cafe in the image is named \"Hauptbahnhof\""
image_path = "./examples/058214af21a03013.jpg"
type = "image-to-text"
response, claim_list = pipeline.run(text=text, image_path=image_path, type=type)
print(response)
print(claim_list)
```
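For the text-to-image direction, a minimal sketch is shown below, assuming the same `Pipeline.run` entry point accepts `type="text-to-image"` with `text` as the generation prompt and `image_path` pointing to the synthesized image; the file name is a made-up placeholder, not an image shipped with the repository.

```python
# A hedged sketch for the text-to-image case; assumes Pipeline.run accepts
# type="text-to-image" as suggested by the task definition above.
from pipeline.run_pipeline import *

pipeline = Pipeline()
text = "A red bicycle leaning against a wooden fence"   # the user prompt x
image_path = "./examples/generated_bicycle.jpg"          # placeholder path to the synthesized image v
response, claim_list = pipeline.run(text=text, image_path=image_path, type="text-to-image")
print(response)    # detection output, as in the image-to-text example
print(claim_list)  # claims extracted from the prompt
```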
---

## 🚩Citation

Please cite our repository if you use EasyDetect in your work.

```bibtex
@article{chen23factchd,
  author     = {Xiang Chen and Duanzheng Song and Honghao Gui and Chengxi Wang and Ningyu Zhang and
                Jiang Yong and Fei Huang and Chengfei Lv and Dan Zhang and Huajun Chen},
  title      = {FactCHD: Benchmarking Fact-Conflicting Hallucination Detection},
  journal    = {CoRR},
  volume     = {abs/2310.12086},
  year       = {2023},
  url        = {https://doi.org/10.48550/arXiv.2310.12086},
  doi        = {10.48550/ARXIV.2310.12086},
  eprinttype = {arXiv},
  eprint     = {2310.12086},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2310-12086.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
@inproceedings{chen-etal-2024-unified-hallucination,
    title = "Unified Hallucination Detection for Multimodal Large Language Models",
    author = "Chen, Xiang  and
      Wang, Chenxi  and
      Xue, Yida  and
      Zhang, Ningyu  and
      Yang, Xiaoyan  and
      Li, Qiang  and
      Shen, Yue  and
      Liang, Lei  and
      Gu, Jinjie  and
      Chen, Huajun",
    editor = "Ku, Lun-Wei  and
      Martins, Andre  and
      Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.178",
    pages = "3235--3252",
}
```

---

## 🎉Contributors

<a href="https://github.com/OpenKG-ORG/EasyDetect/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=OpenKG-ORG/EasyDetect" />
</a>

We will offer long-term maintenance to fix bugs, solve issues, and meet new requests. If you run into any problems, please open an issue.