Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/OpenGVLab/Multi-Modality-Arena
Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more!
chat chatbot chatgpt gradio large-language-models llms multi-modality vision-language-model vqa
Last synced: about 1 month ago
JSON representation
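The "JSON representation" link exposes this record through the ecosyste.ms API mentioned at the top of the page. Below is a minimal sketch of fetching it with `requests`; the endpoint and the `url` query parameter are assumptions made for illustration, not documented values, so adjust them to the actual target of the link.

```python
# Minimal sketch: fetch this project's record as JSON from the ecosyste.ms
# Awesome API. The endpoint and "url" parameter are ASSUMPTIONS for
# illustration; use the actual target of the "JSON representation" link.
import requests

API_URL = "https://awesome.ecosyste.ms/api/v1/projects"  # hypothetical endpoint
params = {"url": "https://github.com/OpenGVLab/Multi-Modality-Arena"}

resp = requests.get(API_URL, params=params, timeout=30)
resp.raise_for_status()
print(resp.json())
```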
- Host: GitHub
- URL: https://github.com/OpenGVLab/Multi-Modality-Arena
- Owner: OpenGVLab
- Created: 2023-05-10T09:26:08.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-04-21T11:14:46.000Z (8 months ago)
- Last Synced: 2024-08-01T13:17:45.125Z (4 months ago)
- Topics: chat, chatbot, chatgpt, gradio, large-language-models, llms, multi-modality, vision-language-model, vqa
- Language: Python
- Homepage:
- Size: 21.5 MB
- Stars: 421
- Watchers: 6
- Forks: 30
- Open Issues: 17
- Metadata Files:
  - Readme: README.md
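The repository statistics above are a snapshot from the last sync; they can be refreshed at any time from the public GitHub REST API. The sketch below assumes only the `requests` library and the standard `GET /repos/{owner}/{repo}` endpoint.

```python
# Minimal sketch: refresh the repository metadata listed above from the
# public GitHub REST API (GET /repos/{owner}/{repo}); no auth token is
# needed for low-volume requests against a public repository.
import requests

resp = requests.get(
    "https://api.github.com/repos/OpenGVLab/Multi-Modality-Arena",
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
)
resp.raise_for_status()
repo = resp.json()

print("Default branch:", repo["default_branch"])
print("Language:      ", repo["language"])
print("Stars:         ", repo["stargazers_count"])
print("Forks:         ", repo["forks_count"])
# Note: open_issues_count also counts open pull requests.
print("Open issues:   ", repo["open_issues_count"])
print("Topics:        ", ", ".join(repo.get("topics", [])))
print("Last pushed:   ", repo["pushed_at"])
```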
Awesome Lists containing this project
- awesome-multimodal-in-medical-imaging - OmniMedVQA
- awesome-llm-eval - LVLM-eHub - "Multi-Modality Arena" is an evaluation platform for large multimodal models. Following FastChat, two anonymous models are compared side by side on visual question answering tasks; "Multi-Modality Arena" lets you benchmark vision-language models side by side while providing images as inputs. Supports MiniGPT-4, LLaMA-Adapter V2, LLaVA, BLIP-2, and many more models | (Datasets-or-Benchmark / Multimodal-Crossmodal)
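Both descriptions above come down to the same interaction pattern: one image and one question are sent to two vision-language models, and their answers are shown side by side for comparison. The sketch below illustrates that pattern with Gradio (one of the repository's listed topics); the `answer_model_*` functions are placeholders standing in for real model backends and are not code from this repository.

```python
# Minimal sketch of the side-by-side comparison pattern used by a
# multi-modality arena: one image + one question, two model answers.
# The answer_model_* functions are placeholders, not code from this repo.
import gradio as gr
from PIL import Image


def answer_model_a(image: Image.Image, question: str) -> str:
    # Placeholder: call one vision-language model (e.g. BLIP-2) here.
    return f"[model A] would answer '{question}' about a {image.size} image"


def answer_model_b(image: Image.Image, question: str) -> str:
    # Placeholder: call a second vision-language model (e.g. LLaVA) here.
    return f"[model B] would answer '{question}' about a {image.size} image"


def compare(image: Image.Image, question: str) -> tuple[str, str]:
    # Run both models on the same input so the answers can be judged side by side.
    return answer_model_a(image, question), answer_model_b(image, question)


with gr.Blocks() as demo:
    image = gr.Image(type="pil", label="Input image")
    question = gr.Textbox(label="Question about the image")
    with gr.Row():
        out_a = gr.Textbox(label="Model A answer")
        out_b = gr.Textbox(label="Model B answer")
    gr.Button("Compare").click(compare, inputs=[image, question], outputs=[out_a, out_b])

if __name__ == "__main__":
    demo.launch()
```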