{"id":28676532,"url":"https://github.com/zjunlp/deco","last_synced_at":"2025-06-13T23:05:05.087Z","repository":{"id":258102754,"uuid":"864950608","full_name":"zjunlp/Deco","owner":"zjunlp","description":"[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation","archived":false,"fork":false,"pushed_at":"2024-12-10T04:24:27.000Z","size":18503,"stargazers_count":82,"open_issues_count":0,"forks_count":7,"subscribers_count":4,"default_branch":"main","last_synced_at":"2025-06-02T08:45:58.023Z","etag":null,"topics":["artificial-intelligence","decoding","doco","hallucination","hallucination-mitigation","iclr2025","large-language-models","mllm","multimodal-large-language-models","natural-language-processing"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/zjunlp.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-09-29T15:48:53.000Z","updated_at":"2025-05-28T08:53:05.000Z","dependencies_parsed_at":null,"dependency_job_id":"3e95e74e-d26d-4613-b74d-7072b2fa5ff5","html_url":"https://github.com/zjunlp/Deco","commit_stats":null,"previous_names":["zjunlp/deco"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/zjunlp/Deco","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zjunlp%2FDeco","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zjunlp%2FDeco/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zjunlp%2FDeco/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zjunlp%2FDeco/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/zjunlp","download_url":"https://codeload.github.com/zjunlp/Deco/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zjunlp%2FDeco/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":259732771,"owners_count":22903087,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["artificial-intelligence","decoding","doco","hallucination","hallucination-mitigation","iclr2025","large-language-models","mllm","multimodal-large-language-models","natural-language-processing"],"created_at":"2025-06-13T23:05:02.038Z","updated_at":"2025-06-13T23:05:05.059Z","avatar_url":"https://github.com/zjunlp.png","language":"Python","readme":"# MLLM Can See? 

<p align="center">
  <a href="https://www.arxiv.org/abs/2410.11779">📄 arXiv</a> •
  <a href="https://huggingface.co/papers/2410.11779">🤗 HF Paper</a> •
  <a href="https://notebooklm.google.com/notebook/e41ab929-0cf5-45d2-a5b7-c3c109e2baac/audio">🎧 NotebookLM Audio</a>
</p>

This repository provides the official PyTorch implementation of the following paper:
> [**MLLM Can See? Dynamic Correction Decoding For Hallucination Mitigation**](https://arxiv.org/abs/2410.11779) <br>
> Chenxi Wang<sup>1</sup>,
> Xiang Chen<sup>1</sup>,
> Ningyu Zhang<sup>1</sup>,
> Bozhong Tian<sup>1</sup>,
> Haoming Xu<sup>1</sup>,
> Shumin Deng<sup>2</sup>,
> Huajun Chen<sup>1</sup> <br>
> <sup>1</sup>Zhejiang University, <sup>2</sup>National University of Singapore <br>

## Overview

<p align="center"><img src="img/method.png" alt="teaser" width="500px" /></p>

Multimodal Large Language Models (MLLMs) frequently exhibit hallucinations, but the underlying reasons remain poorly understood. In this paper, we present an empirical analysis and find that, although MLLMs incorrectly generate target objects in the final output, they are actually able to recognize the corresponding visual objects in the preceding layers. We speculate that the strong knowledge priors of the language model suppress this visual information, leading to hallucinations. Motivated by this, we propose DeCo, a novel dynamic correction decoding method for MLLMs that adaptively selects appropriate preceding layers and proportionally integrates their knowledge into the final layer to adjust the output logits. DeCo is model-agnostic and can be seamlessly combined with various classic decoding strategies and applied to different MLLMs. On widely used benchmarks, DeCo reduces hallucination rates by a large margin compared to baselines, highlighting its potential to mitigate hallucinations.
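To make the correction step concrete, below is a minimal, illustrative sketch of the idea in plain PyTorch. It is not the repository's implementation (which lives in `transformers/generation/utils.py`); the function name `deco_correct_logits` is our own, and `alpha`, `threshold_top_p`, and `threshold_top_k` play the same roles as the generation arguments shown later in this README.

```python
import torch
import torch.nn.functional as F

def deco_correct_logits(final_logits: torch.Tensor,
                        early_logits: torch.Tensor,
                        alpha: float = 0.6,
                        threshold_top_p: float = 0.9,
                        threshold_top_k: int = 20) -> torch.Tensor:
    """Illustrative (not official) DeCo-style correction for one decoding step.

    final_logits: (vocab_size,) logits from the last transformer layer.
    early_logits: (vocab_size,) logits obtained by applying the LM head to the
                  hidden state of a selected preceding ("early exit") layer.
    """
    probs = F.softmax(early_logits, dim=-1)

    # Candidate tokens: top-k of the early-exit distribution, further truncated
    # by a cumulative-probability (top-p) threshold.
    topk_probs, topk_idx = probs.topk(threshold_top_k)
    keep = topk_probs.cumsum(dim=-1) <= threshold_top_p
    keep[0] = True  # always keep the most likely candidate
    candidates = topk_idx[keep]

    # Proportionally integrate the preceding layer's evidence into the final
    # logits for the candidate tokens only; all other logits stay unchanged.
    corrected = final_logits.clone()
    corrected[candidates] = final_logits[candidates] + alpha * early_logits[candidates]
    return corrected

# Toy usage with random logits standing in for a model's outputs.
vocab_size = 32000
final_layer_logits = torch.randn(vocab_size)
early_layer_logits = torch.randn(vocab_size)
next_token = deco_correct_logits(final_layer_logits, early_layer_logits).argmax()
```

In the actual method, the preceding layer is chosen adaptively from a candidate interval (the `early_exit_layers` argument below) rather than fixed in advance.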

## Demo
We provide a handy [Jupyter Notebook](https://github.com/zjunlp/Deco/blob/main/DeCo_examples.ipynb) demonstrating how to use DeCo!

## Setup

The main implementation of DeCo is in `transformers/generation/utils.py`.

```bash
conda create -n deco python==3.9
conda activate deco
pip install -r requirements.txt
```

## TL;DR
After setting up the environment, you can use DeCo directly with your own MLLM:

```python
with torch.no_grad():
    output_dict = model.generate(
        input_ids,
        images=image_tensor,
        do_sample=args.do_sample,
        temperature=args.temperature,
        top_p=args.top_p,
        num_beams=args.num_beams,
        max_new_tokens=args.max_new_tokens,
        return_dict_in_generate=True,
        output_hidden_states=True,
        use_deco=True,
        alpha=0.6,
        threshold_top_p=0.9,
        threshold_top_k=20,
        early_exit_layers=list(range(20, 29)),
    )

output_ids = output_dict.sequences
outputs = tokenizer.batch_decode(output_ids)
```

## Evaluation

The following evaluations require the MSCOCO 2014 dataset. Please download it [here](https://cocodataset.org/#home) and extract it to your data path.

### Arguments

| Argument | Example | Description |
| -------------------- | ------------------- | ------------- |
| `--data-path` | `/path/to/dataset` | Path to the dataset file or folder, e.g., `COCO_2014/val2014/`. |
| `--alpha` | `0.5` | Scaling factor that controls the calibration strength. |
| `--threshold_top_p` | `0.9` | Threshold controlling the number of candidate tokens. |
| `--early-exit-layers` | `range(20,29)` | Candidate layer interval; adjust it according to the model. |
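For reference, a hypothetical command-line invocation combining the arguments above might look like the following. The flag names are taken from the table, but whether every evaluation script exposes exactly these options is an assumption; check each script's argument parser.

```bash
# Hypothetical example: caption generation with DeCo's calibration knobs.
python chair_llava.py \
    --data-path /path/to/COCO_2014/val2014/ \
    --alpha 0.5 \
    --threshold_top_p 0.9 \
    --early-exit-layers "range(20,29)"
```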

### CHAIR
- Generate the MLLM's responses and save them to a JSONL file:
```bash
python chair_llava.py
```
- Calculate CHAIR using the generated JSONL file:
```bash
python chair.py --cap_file /path/to/jsonl --image_id_key image_id --caption_key caption --coco_path /path/to/COCO/annotations_trainval2014/annotations/ --save_path /path/to/save/jsonl
```

### AMBER
- Generate the MLLM's responses and save them to a JSONL file:
```bash
python amber_llava.py
```
- Calculate the metric scores using the generated JSONL file:
```bash
python inference.py
```

### POPE
```bash
python pope_eval.py
```

### MME
```bash
python mme_llava.py
```

### Additional Experimental Results
We compare the baseline, DoLa, DeCo, and the combination of DoLa and DeCo on LLM benchmarks such as StrategyQA and GSM8K, using LLaMA-7B.

| Method | StrategyQA | GSM8K |
| -------------------- | ------------------- | ------------- |
| Baseline | 59.8 | 10.8 |
| DoLa | 64.1 | 10.5 |
| DeCo | 61.2 | 10.2 |
| DoLa+DeCo | 60.0 | 9.6 |

We also compare the baseline, DoLa, DeCo, and the combination of DoLa and DeCo on CHAIR, using LLaVA-v1.5-7B.

| Method | CHAIRs | CHAIRi |
| -------------------- | ------------------- | ------------- |
| Baseline | 45.0 | 14.7 |
| DoLa | 47.8 | 13.8 |
| DeCo | 37.8 | 11.1 |
| DoLa+DeCo | 44.2 | 11.9 |

## Reference Repositories
- DoLa: https://github.com/voidism/DoLa
- OPERA: https://github.com/shikiw/OPERA
- VCD: https://github.com/DAMO-NLP-SG/VCD
- LLaVA: https://github.com/haotian-liu/LLaVA
- MiniGPT-4: https://github.com/Vision-CAIR/MiniGPT-4

## Acknowledgement
This repository references code from [DoLa](https://github.com/voidism/DoLa) and [OPERA](https://github.com/shikiw/OPERA) and builds on the MLLM codebases of [LLaVA](https://github.com/haotian-liu/LLaVA) and [MiniGPT-4](https://github.com/Vision-CAIR/MiniGPT-4). We extend our gratitude to the authors for their outstanding work.

## Citation
If you find this work useful for your research, please cite [our paper](https://arxiv.org/abs/2410.11779):

```bibtex
@misc{wang2024mllmseedynamiccorrection,
      title={MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation},
      author={Chenxi Wang and Xiang Chen and Ningyu Zhang and Bozhong Tian and Haoming Xu and Shumin Deng and Huajun Chen},
      year={2024},
      eprint={2410.11779},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.11779},
}
```