{"id":24588611,"url":"https://github.com/openbmb/visrag","last_synced_at":"2025-10-05T13:31:44.746Z","repository":{"id":257987604,"uuid":"872632465","full_name":"OpenBMB/VisRAG","owner":"OpenBMB","description":"Parsing-free RAG supported by VLMs","archived":false,"fork":false,"pushed_at":"2025-02-19T10:20:56.000Z","size":15462,"stargazers_count":725,"open_issues_count":3,"forks_count":57,"subscribers_count":12,"default_branch":"master","last_synced_at":"2025-06-08T16:44:38.319Z","etag":null,"topics":["document-retrieval","document-understanding","multi-modal","multi-modality","rag","retrieval","retrieval-augmented-generation","vision-language-model"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/OpenBMB.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-10-14T19:29:00.000Z","updated_at":"2025-06-08T13:43:29.000Z","dependencies_parsed_at":"2024-10-20T14:55:52.476Z","dependency_job_id":null,"html_url":"https://github.com/OpenBMB/VisRAG","commit_stats":null,"previous_names":["openbmb/visrag"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/OpenBMB/VisRAG","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenBMB%2FVisRAG","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenBMB%2FVisRAG/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenBMB%2FVisRAG/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenBMB%2FVisRAG/manif
ests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/OpenBMB","download_url":"https://codeload.github.com/OpenBMB/VisRAG/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenBMB%2FVisRAG/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":278463509,"owners_count":25991175,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-05T02:00:06.059Z","response_time":54,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["document-retrieval","document-understanding","multi-modal","multi-modality","rag","retrieval","retrieval-augmented-generation","vision-language-model"],"created_at":"2025-01-24T07:16:10.890Z","updated_at":"2025-10-05T13:31:40.743Z","avatar_url":"https://github.com/OpenBMB.png","language":"Python","readme":"# VisRAG: Vision-based Retrieval-augmented Generation on Multi-modality Documents\n[![Github](https://img.shields.io/badge/VisRAG-000000?style=for-the-badge\u0026logo=github\u0026logoColor=000\u0026logoColor=white)](https://github.com/OpenBMB/VisRAG)\n[![Google 
Colab](https://img.shields.io/badge/VisRAG_Pipeline-ffffff?style=for-the-badge\u0026logo=googlecolab\u0026logoColor=f9ab00)](https://colab.research.google.com/drive/11KV9adDNXPfHiuFAfXNOvtYJKcyR8JZH?usp=sharing)\n[![arXiv](https://img.shields.io/badge/arXiv-2410.10594-ff0000.svg?style=for-the-badge)](https://arxiv.org/abs/2410.10594)\n[![Hugging Face](https://img.shields.io/badge/VisRAG_Ret-fcd022?style=for-the-badge\u0026logo=huggingface\u0026logoColor=000)](https://huggingface.co/openbmb/VisRAG-Ret)\n[![Hugging Face](https://img.shields.io/badge/VisRAG_Collection-fcd022?style=for-the-badge\u0026logo=huggingface\u0026logoColor=000)](https://huggingface.co/collections/openbmb/visrag-6717bbfb471bb018a49f1c69)\n[![Hugging Face](https://img.shields.io/badge/VisRAG_Pipeline-fcd022?style=for-the-badge\u0026logo=huggingface\u0026logoColor=000)](https://huggingface.co/spaces/tcy6/VisRAG_Pipeline)\n\n\u003cp align=\"center\"\u003e•\n \u003ca href=\"#-introduction\"\u003e 📖 Introduction \u003c/a\u003e •\n \u003ca href=\"#-news\"\u003e🎉 News\u003c/a\u003e •\n \u003ca href=\"#-visrag-pipeline\"\u003e✨ VisRAG Pipeline\u003c/a\u003e •\n \u003ca href=\"#%EF%B8%8F-setup\"\u003e⚙️ Setup\u003c/a\u003e •\n \u003ca href=\"#%EF%B8%8F-training\"\u003e⚡️ Training\u003c/a\u003e \n\u003c/p\u003e\n\u003cp align=\"center\"\u003e•\n \u003ca href=\"#-evaluation\"\u003e📃 Evaluation\u003c/a\u003e •\n \u003ca href=\"#-usage\"\u003e🔧 Usage\u003c/a\u003e •\n \u003ca href=\"#-license\"\u003e📄 License \u003c/a\u003e •\n \u003ca href=\"#-contact\"\u003e📧 Contact\u003c/a\u003e •\n \u003ca href=\"#-star-history\"\u003e📈 Star History\u003c/a\u003e\n\u003c/p\u003e\n\n# 📖 Introduction\n**VisRAG** is a novel vision-language model (VLM)-based RAG pipeline. In this pipeline, instead of first parsing the document to obtain text, the document is directly embedded as an image using a VLM and then retrieved to enhance the generation of a VLM. 
Compared to traditional text-based RAG, **VisRAG** maximizes the retention and utilization of the information in the original documents, eliminating the information loss introduced during the parsing process.\n\u003cp align=\"center\"\u003e\u003cimg width=800 src=\"assets/main_figure.png\"/\u003e\u003c/p\u003e\n\n# 🎉 News\n\n* 20241111: Released our [VisRAG Pipeline](https://github.com/OpenBMB/VisRAG/tree/master/scripts/demo/visrag_pipeline) on GitHub, now supporting visual understanding across multiple PDF documents.\n* 20241104: Released our [VisRAG Pipeline](https://huggingface.co/spaces/tcy6/VisRAG_Pipeline) on Hugging Face Space.\n* 20241031: Released our [VisRAG Pipeline](https://colab.research.google.com/drive/11KV9adDNXPfHiuFAfXNOvtYJKcyR8JZH?usp=sharing) on Colab. Released code for converting files to images, which can be found at `scripts/file2img`.\n* 20241015: Released our train and test data in the [VisRAG](https://huggingface.co/collections/openbmb/visrag-6717bbfb471bb018a49f1c69) Collection on Hugging Face, referenced at the beginning of this page.\n* 20241014: Released our [Paper](https://arxiv.org/abs/2410.10594) on arXiv. Released our [Model](https://huggingface.co/openbmb/VisRAG-Ret) on Hugging Face. Released our [Code](https://github.com/OpenBMB/VisRAG) on GitHub.\n\n# ✨ VisRAG Pipeline\n\n## VisRAG-Ret\n\n**VisRAG-Ret** is a document embedding model built on [MiniCPM-V 2.0](https://huggingface.co/openbmb/MiniCPM-V-2), a vision-language model that integrates [SigLIP](https://huggingface.co/google/siglip-so400m-patch14-384) as the vision encoder and [MiniCPM-2B](https://huggingface.co/openbmb/MiniCPM-2B-sft-bf16) as the language model.\n\n## VisRAG-Gen\n\nIn the paper, we use MiniCPM-V 2.0, MiniCPM-V 2.6, and GPT-4o as the generators. 
Actually, you can use any VLM you like!\n\n# ⚙️ Setup\n\n```bash\ngit clone https://github.com/OpenBMB/VisRAG.git\nconda create --name VisRAG python==3.10.8\nconda activate VisRAG\nconda install nvidia/label/cuda-11.8.0::cuda-toolkit\ncd VisRAG\npip install -r requirements.txt\npip install -e .\ncd timm_modified\npip install -e .\ncd ..\n```\nNote:\n1. `timm_modified` is an enhanced version of the `timm` library that supports gradient checkpointing, which we use during training to reduce memory usage.\n\n# ⚡️ Training\n\n## VisRAG-Ret\n\nOur training dataset of 362,110 Query-Document (Q-D) pairs for **VisRAG-Ret** comprises the train sets of openly available academic datasets (34%) and a synthetic dataset of pages from web-crawled PDF documents augmented with VLM-generated (GPT-4o) pseudo-queries (66%).\n\n```bash\nbash scripts/train_retriever/train.sh 2048 16 8 0.02 1 true false config/deepspeed.json 1e-5 false wmean causal 1 true 2 false \u003cmodel_dir\u003e \u003crepo_name_or_path\u003e\n```\nNote:\n1. Our training data can be found in the `VisRAG` collection on Hugging Face, referenced at the beginning of this page. Please note that we have separated the `In-domain-data` and `Synthetic-data` due to their distinct differences. If you wish to train with the complete dataset, you’ll need to merge and shuffle them manually.\n2. The parameters listed above are those used in our paper and can be used to reproduce the results.\n3. `\u003crepo_name_or_path\u003e` can be any of the following: `openbmb/VisRAG-Ret-Train-In-domain-data`, `openbmb/VisRAG-Ret-Train-Synthetic-data`, the directory path of a repository downloaded from Hugging Face, or the directory containing your own training data.\n4. If you wish to train using your own datasets, remove the `--from_hf_repo` line from the `train.sh` script. 
Additionally, ensure that your dataset directory contains a `metadata.json` file, which must include a `length` field specifying the total number of samples in the dataset.\n5. Our training framework is a modified version of [OpenMatch](https://github.com/OpenMatch/OpenMatch).\n\n## VisRAG-Gen\n\nThe generation stage does not involve any fine-tuning; we directly use off-the-shelf LLMs/VLMs for generation.\n\n# 📃 Evaluation\n\n## VisRAG-Ret\n```bash\nbash scripts/eval_retriever/eval.sh 512 2048 16 8 wmean causal ArxivQA,ChartQA,MP-DocVQA,InfoVQA,PlotQA,SlideVQA \u003cckpt_path\u003e\n```\n\nNote:\n1. Our test data can be found in the `VisRAG` Collection on Hugging Face, referenced at the beginning of this page.\n2. The parameters listed above are those used in our paper and can be used to reproduce the results.\n3. The evaluation script uses datasets from Hugging Face by default. If you prefer to evaluate with locally downloaded dataset repositories, modify the `CORPUS_PATH`, `QUERY_PATH`, and `QRELS_PATH` variables in the evaluation script to point to the local repository directories.\n\n## VisRAG-Gen\nThere are three settings in our generation: text-based generation, single-image-VLM-based generation, and multi-image-VLM-based generation. Under single-image-VLM-based generation, there are two additional settings: page concatenation and weighted selection. 
For detailed information about these settings, please refer to our paper.\n```bash\npython scripts/generate/generate.py \\\n--model_name \u003cmodel_name\u003e \\\n--model_name_or_path \u003cmodel_path\u003e \\\n--dataset_name \u003cdataset_name\u003e \\\n--dataset_name_or_path \u003cdataset_path\u003e \\\n--rank \u003cprocess_rank\u003e \\\n--world_size \u003cworld_size\u003e \\\n--topk \u003cnumber of docs retrieved for generation\u003e \\\n--results_root_dir \u003cretrieval_results_dir\u003e \\\n--task_type \u003ctask_type\u003e \\\n--concatenate_type \u003cimage_concatenate_type\u003e \\\n--output_dir \u003coutput_dir\u003e\n```\nNote:\n1. `use_positive_sample` determines whether to use only the positive document for the query. Enable this to exclude retrieved documents and omit `topk` and `results_root_dir`. If disabled, you must specify `topk` (the number of retrieved documents) and organize `results_root_dir` as `results_root_dir/dataset_name/*.trec`.\n2. `concatenate_type` is only needed when `task_type` is set to `page_concatenation`. Omit it otherwise.\n3. Always specify `model_name_or_path`, `dataset_name_or_path`, and `output_dir`.\n4. 
Use `--openai_api_key` only if GPT-based evaluation is needed.\n\n# 🔧 Usage\n\n## VisRAG-Ret\n\nModel on Hugging Face: https://huggingface.co/openbmb/VisRAG-Ret\n\n```python\nfrom transformers import AutoModel, AutoTokenizer\nimport torch\nimport torch.nn.functional as F\nfrom PIL import Image\nimport os\n\ndef weighted_mean_pooling(hidden, attention_mask):\n    attention_mask_ = attention_mask * attention_mask.cumsum(dim=1)\n    s = torch.sum(hidden * attention_mask_.unsqueeze(-1).float(), dim=1)\n    d = attention_mask_.sum(dim=1, keepdim=True).float()\n    reps = s / d\n    return reps\n\n@torch.no_grad()\ndef encode(text_or_image_list):\n    if isinstance(text_or_image_list[0], str):\n        inputs = {\n            \"text\": text_or_image_list,\n            \"image\": [None] * len(text_or_image_list),\n            \"tokenizer\": tokenizer\n        }\n    else:\n        inputs = {\n            \"text\": [\"\"] * len(text_or_image_list),\n            \"image\": text_or_image_list,\n            \"tokenizer\": tokenizer\n        }\n    outputs = model(**inputs)\n    attention_mask = outputs.attention_mask\n    hidden = outputs.last_hidden_state\n\n    reps = weighted_mean_pooling(hidden, attention_mask)\n    embeddings = F.normalize(reps, p=2, dim=1).detach().cpu().numpy()\n    return embeddings\n\nmodel_name_or_path = \"openbmb/VisRAG-Ret\"\ntokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)\nmodel = AutoModel.from_pretrained(model_name_or_path, torch_dtype=torch.bfloat16, trust_remote_code=True)\nmodel.eval()\n\nscript_dir = os.path.dirname(os.path.realpath(__file__))\nqueries = [\"What does a dog look like?\"]\npassages = [\n    Image.open(os.path.join(script_dir, 'test_image/cat.jpeg')).convert('RGB'),\n    Image.open(os.path.join(script_dir, 'test_image/dog.jpg')).convert('RGB'),\n]\n\nINSTRUCTION = \"Represent this query for retrieving relevant documents: \"\nqueries = [INSTRUCTION + query for query in 
queries]\n\nembeddings_query = encode(queries)\nembeddings_doc = encode(passages)\n\nscores = embeddings_query @ embeddings_doc.T\nprint(scores.tolist())\n```\n\n## VisRAG-Gen\n\nFor `VisRAG-Gen`, you can try the `VisRAG Pipeline` demo on Google Colab, which includes both `VisRAG-Ret` and `VisRAG-Gen`.\n\n# 📄 License\n\n* The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.\n* The usage of **VisRAG-Ret** model weights must strictly follow the [MiniCPM Model License.md](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md).\n* The models and weights of **VisRAG-Ret** are completely free for academic research. After filling out a [\"questionnaire\"](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, **VisRAG-Ret** weights are also available for free commercial use.\n\n# 📧 Contact\n\n- Shi Yu: yus21@mails.tsinghua.edu.cn\n- Chaoyue Tang: tcy006@gmail.com\n\n# 📈 Star History\n\n\u003ca href=\"https://star-history.com/#openbmb/VisRAG\u0026Date\"\u003e\n \u003cpicture\u003e\n   \u003csource media=\"(prefers-color-scheme: dark)\" srcset=\"https://api.star-history.com/svg?repos=openbmb/VisRAG\u0026type=Date\u0026theme=dark\" /\u003e\n   \u003csource media=\"(prefers-color-scheme: light)\" srcset=\"https://api.star-history.com/svg?repos=openbmb/VisRAG\u0026type=Date\" /\u003e\n   \u003cimg alt=\"Star History Chart\" src=\"https://api.star-history.com/svg?repos=openbmb/VisRAG\u0026type=Date\" /\u003e\n \u003c/picture\u003e\n\u003c/a\u003e\n","funding_links":[],"categories":["2024.10"],"sub_categories":["VisRAG【火眼金睛】"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopenbmb%2Fvisrag","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fopenbmb%2Fvisrag","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopenbmb%2Fvisrag/lists"}