# awesome foundation and multimodal models

## πŸ‘οΈ + πŸ’¬ + 🎧 = πŸ€–

**foundation model** - a pre-trained machine learning model that serves as a base for a wide range of downstream tasks. It captures general knowledge from a large dataset and can be fine-tuned to perform specific tasks more effectively.

**multimodal model** - a model that can process multiple modalities (e.g. text, images, video, audio) at the same time.

## 🤖 models

### YOLO-World: Real-Time Open-Vocabulary Object Detection
[![arXiv](https://img.shields.io/badge/arXiv-2401.17270-b31b1b.svg)](https://arxiv.org/abs/2401.17270) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/AILab-CVC/YOLO-World) [![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://youtu.be/X7gKBGVz4vs) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/SkalskiP/YOLO-World) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-yolo-world.ipynb)

Tianheng Cheng, Lin Song, Yixiao Ge, Wenyu Liu, Xinggang Wang, Ying Shan
- **Date:** 2024-01-30
- **Modalities:** 👁️ + 💬
- **Tasks:** Zero-Shot Object Detection
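
A minimal inference sketch via the `ultralytics` package, which ships a YOLO-World integration; the weights file name and image path below are placeholder assumptions:

```python
# Zero-shot detection with YOLO-World through ultralytics (pip install ultralytics).
from ultralytics import YOLOWorld

model = YOLOWorld("yolov8s-world.pt")             # weights download on first use
model.set_classes(["person", "bus", "backpack"])  # free-form vocabulary prompt
results = model.predict("image.jpg")              # placeholder image path
results[0].show()                                 # draw boxes and labels
```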

### Depth Anything
[![arXiv](https://img.shields.io/badge/arXiv-2401.10891-b31b1b.svg)](https://arxiv.org/abs/2401.10891) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/LiheYoung/Depth-Anything) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/LiheYoung/Depth-Anything) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/LiheYoung/depth_anything_vitl14) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/Depth%20Anything/Predicting_depth_in_an_image_with_Depth_Anything.ipynb)

Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao
- **Date:** 2024-01-19
- **Modalities:** 👁️
- **Tasks:** Depth Estimation
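
A short sketch using the `transformers` depth-estimation pipeline; the ported checkpoint name `LiheYoung/depth-anything-small-hf` is an assumption (the badge above links the original `vitl14` weights):

```python
# Monocular depth estimation with the transformers pipeline.
from PIL import Image
from transformers import pipeline

pipe = pipeline("depth-estimation", model="LiheYoung/depth-anything-small-hf")
result = pipe(Image.open("image.jpg"))  # placeholder image path
result["depth"].save("depth.png")       # "depth" is a PIL image of relative depth
```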

### EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything
[![arXiv](https://img.shields.io/badge/arXiv-2312.00863-b31b1b.svg)](https://arxiv.org/abs/2312.00863) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/yformer/EfficientSAM) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/SkalskiP/EfficientSAM) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/merve/EfficientSAM)

Yunyang Xiong, Bala Varadarajan, Lemeng Wu, Xiaoyu Xiang, Fanyi Xiao, Chenchen Zhu, Xiaoliang Dai, Dilin Wang, Fei Sun, Forrest Iandola, Raghuraman Krishnamoorthi, Vikas Chandra
- **Date:** 2023-12-01
- **Modalities:** 👁️
- **Tasks:** Zero-Shot Object Segmentation

### Qwen-VL-Plus / Max
[![arXiv](https://img.shields.io/badge/arXiv-2308.12966-b31b1b.svg)](https://arxiv.org/abs/2308.12966) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/QwenLM/Qwen-VL#qwen-vl-plus) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Qwen/Qwen-VL-Plus) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/Qwen/Qwen-VL)

Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
- **Date:** 2023-11-28
- **Modalities:** 👁️ + 💬
- **Tasks:** Image Captioning, VQA, Zero-Shot Object Detection

### CogVLM: Visual Expert for Pretrained Language Models
[![arXiv](https://img.shields.io/badge/arXiv-2311.03079-b31b1b.svg)](https://arxiv.org/abs/2311.03079) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/THUDM/CogVLM) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/lykeven/CogVLM) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/THUDM/CogVLM)

Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi Li, Yuxiao Dong, Ming Ding, Jie Tang
- **Date:** 2023-11-06
- **Modalities:** 👁️ + 💬
- **Tasks:** Image Captioning, VQA

### Fuyu-8B: A Multimodal Architecture for AI Agents
[![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/adept/fuyu-8b-demo) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/adept/fuyu-8b)

Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, Sağnak Taşırlar
- **Date:** 2023-10-17
- **Modalities:** 👁️ + 💬
- **Tasks:** Image Classification, Image Captioning, VQA, Find Text in Image

### Ferret: Refer and Ground Anything Anywhere at Any Granularity
[![arXiv](https://img.shields.io/badge/arXiv-2310.07704-b31b1b.svg)](https://arxiv.org/abs/2310.07704) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/apple/ml-ferret)

Haoxuan You, Haotian Zhang, Zhe Gan, Xianzhi Du, Bowen Zhang, Zirui Wang, Liangliang Cao, Shih-Fu Chang, Yinfei Yang
- **Date:** 2023-10-11
- **Modalities:** 👁️ + 💬
- **Tasks:** Image Captioning, VQA, Phrase Grounding, Object Detection

### MetaCLIP: Demystifying CLIP Data
[![arXiv](https://img.shields.io/badge/arXiv-2309.16671-b31b1b.svg)](https://arxiv.org/abs/2309.16671) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/facebookresearch/MetaCLIP) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/SkalskiP/SAM_and_MetaCLIP) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/facebook/metaclip-b32-400m) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1V0Rv1QQJkcolTjiwJuRsqWycROvYjOwg?usp=sharing)

Hu Xu, Saining Xie, Xiaoqing Ellen Tan, Po-Yao Huang, Russell Howes, Vasu Sharma, Shang-Wen Li, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer
- **Date:** 2023-09-28
- **Modalities:** 👁️ + 💬
- **Tasks:** Zero-Shot Classification
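
MetaCLIP checkpoints reuse the CLIP architecture, so they load through the standard CLIP classes in `transformers`; a zero-shot classification sketch (labels and image path are placeholders):

```python
# Zero-shot image classification with a MetaCLIP checkpoint.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("facebook/metaclip-b32-400m")
processor = CLIPProcessor.from_pretrained("facebook/metaclip-b32-400m")

inputs = processor(text=["a photo of a cat", "a photo of a dog"],
                   images=Image.open("image.jpg"), return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)  # label probabilities
```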

### Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
[![arXiv](https://img.shields.io/badge/arXiv-2308.12966-b31b1b.svg)](https://arxiv.org/abs/2308.12966) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/QwenLM/Qwen-VL) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Qwen/Qwen-VL-Max) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/Qwen/Qwen-VL)

Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
- **Date:** 2023-09-24
- **Modalities:** 👁️ + 💬
- **Tasks:** Image Captioning, VQA

### SigLIP: Sigmoid Loss for Language Image Pre-Training
[![arXiv](https://img.shields.io/badge/arXiv-2303.15343-b31b1b.svg)](https://arxiv.org/abs/2303.15343) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/google-research/big_vision) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/merve/compare_clip_siglip) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/google/siglip-base-patch16-224) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/SigLIP/Inference_with_(multilingual)_SigLIP%2C_a_better_CLIP_model.ipynb)

Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, Lucas Beyer
- **Date:** 2023-08-27
- **Modalities:** 👁️ + 💬
- **Tasks:** Zero-Shot Image Classification
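
A sketch with the `transformers` zero-shot pipeline; `google/siglip-base-patch16-224` is one of the ported checkpoints (an assumption, the badges above link elsewhere):

```python
# Zero-shot classification with SigLIP; scores are per-label sigmoids, not a softmax.
from transformers import pipeline

classifier = pipeline("zero-shot-image-classification",
                      model="google/siglip-base-patch16-224")
print(classifier("image.jpg", candidate_labels=["a cat", "a dog", "a bird"]))
```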

### Nougat: Neural Optical Understanding for Academic Documents
[![arXiv](https://img.shields.io/badge/arXiv-2308.13418-b31b1b.svg)](https://arxiv.org/abs/2308.13418) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/facebookresearch/nougat) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/hf-vision/nougat-transformers) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/facebook/nougat-small) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/Nougat/Inference_with_Nougat_to_read_scientific_PDFs.ipynb)

Lukas Blecher, Guillem Cucurull, Thomas Scialom, Robert Stojnic
- **Date:** 2023-08-25
- **Modalities:** 👁️ + 💬
- **Tasks:** Document OCR

### AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining
[![arXiv](https://img.shields.io/badge/arXiv-2308.05734-b31b1b.svg)](https://arxiv.org/abs/2308.05734) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/haoheliu/AudioLDM2) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/haoheliu/AudioLDM_48K_Text-to-HiFiAudio_Generation) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/cvssp/audioldm2)

Haohe Liu, Qiao Tian, Yi Yuan, Xubo Liu, Xinhao Mei, Qiuqiang Kong, Yuping Wang, Wenwu Wang, Yuxuan Wang, Mark D. Plumbley
- **Date:** 2023-08-10
- **Modalities:** 💬 + 🎧
- **Tasks:** Text-to-Audio, Text-to-Speech

### OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
[![arXiv](https://img.shields.io/badge/arXiv-2308.01390-b31b1b.svg)](https://arxiv.org/abs/2308.01390) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/mlfoundations/open_flamingo) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/openflamingo/OpenFlamingo) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/openflamingo/OpenFlamingo-9B-vitl-mpt7b)

Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt
- **Date:** 2023-08-02
- **Modalities:** 👁️ + 💬
- **Tasks:** Image Classification, Image Captioning, VQA

### Kosmos-2: Grounding Multimodal Large Language Models to the World
[![arXiv](https://img.shields.io/badge/arXiv-2306.14824-b31b1b.svg)](https://arxiv.org/abs/2306.14824) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/microsoft/unilm/tree/master/kosmos-2) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/ydshieh/Kosmos-2) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/microsoft/kosmos-2-patch14-224)

Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, Furu Wei
- **Date:** 2023-07-26
- **Modalities:** 👁️ + 💬
- **Tasks:** Image Captioning, VQA, Phrase Grounding
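
A grounded-captioning sketch against the `transformers` port linked above; the `<grounding>` prefix asks the model to emit bounding boxes for the entities it mentions:

```python
# Grounded image captioning with Kosmos-2.
from PIL import Image
from transformers import AutoProcessor, Kosmos2ForConditionalGeneration

model_id = "microsoft/kosmos-2-patch14-224"
processor = AutoProcessor.from_pretrained(model_id)
model = Kosmos2ForConditionalGeneration.from_pretrained(model_id)

inputs = processor(text="<grounding>An image of", images=Image.open("image.jpg"),
                   return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=64)
text = processor.batch_decode(ids, skip_special_tokens=True)[0]
caption, entities = processor.post_process_generation(text)  # entities carry boxes
```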

### OWLv2: Scaling Open-Vocabulary Object Detection
[![arXiv](https://img.shields.io/badge/arXiv-2306.09683-b31b1b.svg)](https://arxiv.org/abs/2306.09683) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/merve/owlv2) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/google/owlv2-base-patch16-ensemble) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/NielsRogge/Transformers-Tutorials/blob/master/OWLv2/Zero_and_one_shot_object_detection_with_OWLv2.ipynb)

Matthias Minderer, Alexey Gritsenko, Neil Houlsby
- **Date:** 2023-06-17
- **Modalities:** 👁️ + 💬
- **Tasks:** Zero-Shot Object Detection
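
An open-vocabulary detection sketch using the `transformers` port and the checkpoint linked above; the queries and score threshold are illustrative:

```python
# Text-conditioned zero-shot detection with OWLv2.
import torch
from PIL import Image
from transformers import Owlv2Processor, Owlv2ForObjectDetection

model_id = "google/owlv2-base-patch16-ensemble"
processor = Owlv2Processor.from_pretrained(model_id)
model = Owlv2ForObjectDetection.from_pretrained(model_id)

image = Image.open("image.jpg")
inputs = processor(text=[["a photo of a cat", "a photo of a dog"]], images=image,
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
results = processor.post_process_object_detection(
    outputs, threshold=0.2, target_sizes=torch.tensor([image.size[::-1]]))
```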

### ImageBind: One Embedding Space To Bind Them All
[![arXiv](https://img.shields.io/badge/arXiv-2305.05665-b31b1b.svg)](https://arxiv.org/abs/2305.05665) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/facebookresearch/ImageBind) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/JustinLin610/ImageBind_zeroshot_demo)

Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra
- **Date:** 2023-05-09
- **Modalities:** 👁️ + 💬 + 🎧
- **Tasks:** Zero-Shot Classification, Cross-Modal Retrieval

### LLaVA: Large Language and Vision Assistant
[![arXiv](https://img.shields.io/badge/arXiv-2304.08485-b31b1b.svg)](https://arxiv.org/abs/2304.08485) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/haotian-liu/LLaVA) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/badayvedat/LLaVA) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/liuhaotian/llava-v1.6-34b)

Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee
- **Date:** 2023-04-17
- **Modalities:** 👁️ + 💬
- **Tasks:** Vision Language Modeling
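
A VQA sketch assuming the community `transformers` port `llava-hf/llava-1.5-7b-hf` (the badges above link the original repo and a v1.6 34B checkpoint); the prompt template follows the LLaVA-1.5 format:

```python
# Visual question answering with a LLaVA-1.5 port.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"
inputs = processor(text=prompt, images=Image.open("image.jpg"),
                   return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(out[0], skip_special_tokens=True))
```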

### Segment Anything
[![arXiv](https://img.shields.io/badge/arXiv-2304.02643-b31b1b.svg)](https://arxiv.org/abs/2304.02643) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/facebookresearch/segment-anything) [![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://youtu.be/D-D6ZmadzPE) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/radames/candle-segment-anything-wasm) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/facebook/sam-vit-base) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-segment-anything-with-sam.ipynb)

Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, Ross Girshick
- **Date:** 2023-04-05
- **Modalities:** 👁️
- **Tasks:** Zero-Shot Object Segmentation
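
A point-prompt sketch with the `segment_anything` package from the repository above; the checkpoint path and click coordinates are placeholders:

```python
# Promptable segmentation with SAM (pip install segment-anything).
import numpy as np
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # downloaded weights
predictor = SamPredictor(sam)
predictor.set_image(np.array(Image.open("image.jpg").convert("RGB")))

masks, scores, _ = predictor.predict(point_coords=np.array([[500, 375]]),  # (x, y) click
                                     point_labels=np.array([1]))           # 1 = foreground
```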

### Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection
[![arXiv](https://img.shields.io/badge/arXiv-2303.05499-b31b1b.svg)](https://arxiv.org/abs/2303.05499) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/IDEA-Research/GroundingDINO) [![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://youtu.be/cMa77r3YrDk) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/ShilongLiu/Grounding_DINO_demo) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/IDEA-Research/grounding-dino-base) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb)

Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang
- **Date:** 2023-03-09
- **Modalities:** 👁️ + 💬
- **Tasks:** Phrase Grounding, Zero-Shot Object Detection
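
A sketch via the `transformers` port (available in recent versions); `IDEA-Research/grounding-dino-tiny` is an assumed checkpoint id, and text queries are lower-case phrases separated by dots:

```python
# Phrase-grounded zero-shot detection with Grounding DINO.
import torch
from PIL import Image
from transformers import AutoProcessor, GroundingDinoForObjectDetection

model_id = "IDEA-Research/grounding-dino-tiny"
processor = AutoProcessor.from_pretrained(model_id)
model = GroundingDinoForObjectDetection.from_pretrained(model_id)

image = Image.open("image.jpg")
inputs = processor(images=image, text="a cat. a remote control.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
results = processor.post_process_grounded_object_detection(
    outputs, inputs.input_ids, box_threshold=0.35, text_threshold=0.25,
    target_sizes=[image.size[::-1]])
```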

### BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
[![arXiv](https://img.shields.io/badge/arXiv-2301.12597-b31b1b.svg)](https://arxiv.org/abs/2301.12597) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/salesforce/LAVIS/tree/main/projects/blip2) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/merve/BLIP2-with-transformers) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/Salesforce/blip2-opt-6.7b) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/salesforce/LAVIS/blob/main/examples/blip2_instructed_generation.ipynb)

Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi
- **Date:** 2023-01-30
- **Modalities:** 👁️ + 💬
- **Tasks:** Image Captioning, VQA
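
A captioning sketch with the `transformers` port; `Salesforce/blip2-opt-2.7b` is the smaller sibling of the checkpoint linked above:

```python
# Image captioning with BLIP-2 (pass a text prompt to the processor for VQA instead).
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

model_id = "Salesforce/blip2-opt-2.7b"
processor = Blip2Processor.from_pretrained(model_id)
model = Blip2ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto")

inputs = processor(images=Image.open("image.jpg"),
                   return_tensors="pt").to(model.device, torch.float16)
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```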

### Whisper: Robust Speech Recognition via Large-Scale Weak Supervision
[![arXiv](https://img.shields.io/badge/arXiv-2212.04356-b31b1b.svg)](https://arxiv.org/abs/2212.04356) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/openai/whisper) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/openai/whisper) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/openai/whisper-large-v3) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openai/whisper/blob/master/notebooks/LibriSpeech.ipynb)

Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever
- **Date:** 2022-12-06
- **Modalities:** 💬 + 🎧
- **Tasks:** Speech-to-Text
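
A transcription sketch with the `openai-whisper` package from the repository above; the audio path is a placeholder:

```python
# Speech-to-text with Whisper (pip install -U openai-whisper; requires ffmpeg).
import whisper

model = whisper.load_model("base")      # small checkpoint for a quick test
result = model.transcribe("audio.mp3")  # placeholder audio path
print(result["text"])
```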

### OWL-ViT: Simple Open-Vocabulary Object Detection with Vision Transformers
[![arXiv](https://img.shields.io/badge/arXiv-2205.06230-b31b1b.svg)](https://arxiv.org/abs/2205.06230) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit) [![Hugging Face Space](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/adirik/OWL-ViT) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/google/owlvit-base-patch32)

Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, Neil Houlsby
- **Date:** 2022-05-12
- **Modalities:** 👁️ + 💬
- **Tasks:** Zero-Shot Object Detection

### CLIP: Learning Transferable Visual Models From Natural Language Supervision
[![arXiv](https://img.shields.io/badge/arXiv-2103.00020-b31b1b.svg)](https://arxiv.org/abs/2103.00020) [![GitHub](https://badges.aleen42.com/src/github.svg)](https://github.com/openai/CLIP) [![Hugging Face Models](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue)](https://huggingface.co/openai/clip-vit-large-patch14) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-use-openai-clip-classification.ipynb)

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever
- **Date:** 2021-02-26
- **Modalities:** 👁️ + 💬
- **Tasks:** Zero-Shot Classification
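
The canonical usage from the repository above, lightly condensed; the labels and image path are placeholders:

```python
# Zero-shot classification with the original CLIP package
# (pip install git+https://github.com/openai/CLIP.git).
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("image.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)
with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)  # probabilities over the labels
```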

## 🦸 contribution

We would love your help in making this repository even better! If you know of an
amazing paper that isn't listed here, or if you have any suggestions for improvement,
feel free to open an [issue](https://github.com/SkalskiP/awesome-foundation-and-multimodal-models/issues)
or submit a [pull request](https://github.com/SkalskiP/awesome-foundation-and-multimodal-models/pulls).