Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/OpenGVLab/M3I-Pretraining
[CVPR 2023] implementation of Towards All-in-one Pre-training via Maximizing Multi-modal Mutual Information.
- Host: GitHub
- URL: https://github.com/OpenGVLab/M3I-Pretraining
- Owner: OpenGVLab
- Created: 2022-11-21T04:53:14.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2023-06-01T13:51:52.000Z (over 1 year ago)
- Last Synced: 2024-08-03T01:14:53.221Z (4 months ago)
- Homepage: https://arxiv.org/abs/2211.09807
- Size: 600 KB
- Stars: 91
- Watchers: 12
- Forks: 5
- Open Issues: 1
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-llm-and-aigc - M3I-Pretraining : "Towards All-in-one Pre-training via Maximizing Multi-modal Mutual Information". (**[arXiv 2022](https://arxiv.org/abs/2211.09807)**). (Summary)
README
# M3I Pre-training
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/towards-all-in-one-pre-training-via/object-detection-on-coco)](https://paperswithcode.com/sota/object-detection-on-coco?p=towards-all-in-one-pre-training-via)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/towards-all-in-one-pre-training-via/object-detection-on-coco-minival)](https://paperswithcode.com/sota/object-detection-on-coco-minival?p=towards-all-in-one-pre-training-via)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/towards-all-in-one-pre-training-via/object-detection-on-lvis-v1-0-minival)](https://paperswithcode.com/sota/object-detection-on-lvis-v1-0-minival?p=towards-all-in-one-pre-training-via)
[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/towards-all-in-one-pre-training-via/semantic-segmentation-on-ade20k)](https://paperswithcode.com/sota/semantic-segmentation-on-ade20k?p=towards-all-in-one-pre-training-via) [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/towards-all-in-one-pre-training-via/image-classification-on-imagenet)](https://paperswithcode.com/sota/image-classification-on-imagenet?p=towards-all-in-one-pre-training-via)
This repository is an official implementation of CVPR 2023 paper [Towards All-in-one Pre-training via Maximizing Multi-modal Mutual Information](https://arxiv.org/abs/2211.09807).
By [Weijie Su](https://scholar.google.com/citations?user=ECDe6IIAAAAJ&hl=en), [Xizhou Zhu](https://scholar.google.com/citations?user=02RXI00AAAAJ&hl=en), [Chenxin Tao](https://scholar.google.com/citations?user=sXHFIBkAAAAJ&hl=en), [Lewei Lu](https://scholar.google.com/citations?user=zdgKJXIAAAAJ&hl=en), [Bin Li](http://staff.ustc.edu.cn/~binli/), [Gao Huang](http://www.gaohuang.net/), [Yu Qiao](https://scholar.google.com/citations?user=gFtI-8QAAAAJ&hl=en), [Xiaogang Wang](https://scholar.google.com/citations?user=-B5JgjsAAAAJ&hl=en), [Jie Zhou](https://scholar.google.com/citations?user=6a79aPwAAAAJ&hl=en), [Jifeng Dai](https://jifengdai.org/).
Code will be available.
## Introduction
**M**aximizing **M**ulti-modal **M**utual **I**nformation Pre-training (**M3I Pre-training**), initially described in [arxiv](https://arxiv.org/abs/2211.09807), is a simple yet effective one-stage pre-training paradigm. It integrates existing pre-training methods (supervised, weakly-supervised, and self-supervised pre-training) under a unified mutual-information perspective and maintains all of their desired properties within a single pre-training stage. Notably, we successfully pre-train a 1B-parameter model ([InternImage-H](https://arxiv.org/abs/2211.05778)) with M3I Pre-training and achieve a new record of `65.4 mAP` on COCO detection test-dev, `62.5 mAP` on LVIS detection minival, and `62.9 mIoU` on ADE20K.
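For intuition only, the sketch below shows one common way to maximize a lower bound on the mutual information between an input view and a prediction target: the InfoNCE estimator. This is a hypothetical illustration, not the official implementation (the code is not yet released, and the paper's actual objective is more general); names such as `info_nce_lower_bound` are placeholders.

```python
# Hypothetical illustration, NOT the official M3I code:
# an InfoNCE-style lower bound on I(input view; prediction target).
# Depending on what the target representation encodes (class labels,
# paired text, or another augmented view), the same objective covers
# supervised, weakly-supervised, and self-supervised pre-training.
import torch
import torch.nn.functional as F

def info_nce_lower_bound(query: torch.Tensor, target: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """Minimizing this loss maximizes a lower bound on I(query; target).

    query:  (N, D) representations of the input views.
    target: (N, D) representations of the paired prediction targets.
    """
    query = F.normalize(query, dim=-1)
    target = F.normalize(target, dim=-1)
    logits = query @ target.t() / temperature   # (N, N) pairwise similarities
    labels = torch.arange(query.size(0), device=query.device)
    return F.cross_entropy(logits, labels)      # positive pairs on the diagonal

# Toy usage with random features standing in for encoder outputs.
q = torch.randn(8, 256, requires_grad=True)
t = torch.randn(8, 256)
loss = info_nce_lower_bound(q, t)
loss.backward()
```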
## Citation
If this work is helpful for your research, please consider citing the following BibTeX entry.
```
@InProceedings{Su_2023_CVPR,
    author    = {Su, Weijie and Zhu, Xizhou and Tao, Chenxin and Lu, Lewei and Li, Bin and Huang, Gao and Qiao, Yu and Wang, Xiaogang and Zhou, Jie and Dai, Jifeng},
    title     = {Towards All-in-One Pre-Training via Maximizing Multi-Modal Mutual Information},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {15888-15899}
}
```