Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/invictus717/MetaTransformer
Meta-Transformer for Unified Multimodal Learning
- Host: GitHub
- URL: https://github.com/invictus717/MetaTransformer
- Owner: invictus717
- License: apache-2.0
- Created: 2023-07-08T12:40:54.000Z (over 1 year ago)
- Default Branch: master
- Last Pushed: 2023-12-05T07:36:11.000Z (11 months ago)
- Last Synced: 2024-10-29T17:41:10.051Z (10 days ago)
- Topics: artificial-intelligence, computer-vision, foundationmodel, machine-learning, multimedia, multimodal, transformers
- Language: Python
- Homepage: https://arxiv.org/abs/2307.10802
- Size: 21.7 MB
- Stars: 1,513
- Watchers: 22
- Forks: 113
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-foundation-model - MetaTransformer
- StarryDivineSky - invictus717/MetaTransformer - Combines the Meta-Transformer framework with multimodal large language models; the model performs multimodal joint training, supports more modalities including fMRI, depth maps, and normal maps, and demonstrates very impressive performance on 25 benchmarks. As a foundation model, Meta-Transformer can handle data from 12 modalities, which means it can support a wide range of applications: downstream tasks include stock analysis 📈, weather forecasting ❄️ ⛄ ☁️ ☔ ☀️ ⚡, remote sensing 📡, autonomous driving 🚗, social networks 🌍, speech recognition 🔉, and more. Table 1: Meta-Transformer can handle up to 12 modalities, including natural language, RGB images, point clouds, audio, video, tabular data, graphs, time-series data, hyperspectral images, IMU, medical images, and infrared images. The repository explores the potential and extensibility of transformers for multimodal learning: it leverages Transformers' strength in handling variable-length sequences, proposes a data-to-sequence tokenization following a meta-scheme, and applies it to 12 modalities including text, image, point cloud, audio, video, infrared, hyperspectral, X-ray, tabular, graph, time-series, and Inertial Measurement Unit (IMU) data. After obtaining the token sequences, a modality-shared encoder extracts representations across modalities, and with task-specific heads Meta-Transformer can handle various tasks such as classification, detection, and segmentation. (Multimodal large models / Web services_Other)
README
Yiyuan Zhang 1,2*, Kaixiong Gong 1,2*, Kaipeng Zhang 2,†, Hongsheng Li 1,2, Yu Qiao 2, Wanli Ouyang 2, Xiangyu Yue 1,†,‡

1 Multimedia Lab, The Chinese University of Hong Kong
2 OpenGVLab, Shanghai AI Laboratory

* Equal Contribution
† Corresponding Author
‡ Project Lead
[![arXiv](https://img.shields.io/badge/arxiv-2307.10802-b31b1b?style=plastic&color=b31b1b&link=https%3A%2F%2Farxiv.org%2Fabs%2F2307.10802)](https://arxiv.org/abs/2307.10802)
[![website](https://img.shields.io/badge/Project-Website-brightgreen)](https://kxgong.github.io/meta_transformer/)
[![blog-cn](https://img.shields.io/badge/%E6%9C%BA%E5%99%A8%E4%B9%8B%E5%BF%83-%E7%AE%80%E4%BB%8B-brightgreen)](https://mp.weixin.qq.com/s/r38bzqdJxDZUvtDI0c9CEw)
[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Space-blue)](https://huggingface.co/papers/2307.10802)
[![OpenXLab](https://cdn-static.openxlab.org.cn/header/openxlab_models.svg)](https://openxlab.org.cn/models/detail/zhangyiyuan/MetaTransformer)
![](https://img.shields.io/github/stars/invictus717/MetaTransformer?style=social)
## Meta-Transformer with Large Language Models ✨✨✨
We're thrilled to present [OneLLM](https://github.com/csuhan/OneLLM), which combines the Meta-Transformer framework with Multimodal Large Language Models. It performs multimodal joint training 🚀, supports more modalities including fMRI, depth, and normal maps 🚀, and demonstrates very impressive performance on **25** benchmarks 🚀🚀🚀.
🔥🔥 The code, pretrained models, and datasets are publicly available at [OneLLM](https://github.com/csuhan/OneLLM).
🔥🔥 Project Website is at [OneLLM](https://onellm.csuhan.com/).
### 🌟 Single Foundation Model Supports A Wide Range of Applications
As a foundation model, Meta-Transformer can handle data from 12 modalities, which means it can support a wide range of applications. Meta-Transformer can provide services for downstream tasks including stock analysis 📈, weather forecasting ☀️ ☔ ☁️ ❄️ ⛄ ⚡, remote sensing 📡, autonomous driving 🚗, social networks 🌍, speech recognition 🔉, and more.
**Table 1**: Meta-Transformer is capable of handling up to 12 modalities, including natural language, RGB images, point clouds, audio, video, tabular data, graphs, time-series data, hyper-spectral images, IMU, medical images, and infrared images.
## 🚩🚩🚩 Shared-Encoder, Unpaired Data, More Modalities
This repository is built to explore the potential and extensibility of transformers for multimodal learning. We leverage the strength of Transformers in handling variable-length sequences and propose a *Data-to-Sequence* tokenization following a meta-scheme, which we apply to 12 modalities including text, image, point cloud, audio, video, infrared, hyper-spectral, X-Ray, tabular, graph, time-series, and Inertial Measurement Unit (IMU) data.

After obtaining the token sequences, we employ a modality-shared encoder to extract representations across the different modalities. With task-specific heads, Meta-Transformer can then handle a variety of tasks on these modalities, such as classification, detection, and segmentation.
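To make this pipeline concrete, here is a minimal, self-contained PyTorch sketch of the meta-scheme. The `ToyTokenizer`, the two-layer encoder, and the classification head are illustrative stand-ins (not the repository's `Data2Seq` modules or pretrained weights); they only show how per-modality token sequences flow through one shared encoder into a task-specific head.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a per-modality tokenizer: any module that maps raw
# modality data to a sequence of d-dimensional tokens fits the meta-scheme.
class ToyTokenizer(nn.Module):
    def __init__(self, in_dim: int, dim: int = 768):
        super().__init__()
        self.proj = nn.Linear(in_dim, dim)

    def forward(self, x):      # x: (batch, seq_len, in_dim)
        return self.proj(x)    # tokens: (batch, seq_len, dim)

# Modality-shared encoder: the same weights process tokens from every modality
# (two vanilla layers here instead of the pretrained ViT-style blocks).
shared_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True),
    num_layers=2,
)

# Task-specific head (classification example): mean-pool the tokens, then classify.
head = nn.Sequential(nn.LayerNorm(768), nn.Linear(768, 10))

image_tokens = ToyTokenizer(in_dim=196)(torch.randn(2, 49, 196))  # fake "image" data
audio_tokens = ToyTokenizer(in_dim=128)(torch.randn(2, 64, 128))  # fake "audio" data

tokens = torch.cat([image_tokens, audio_tokens], dim=1)  # concatenate token sequences
features = shared_encoder(tokens)                        # modality-shared representation
logits = head(features.mean(dim=1))                      # (2, 10) class logits
```

In the repository itself, the tokenizers are the `Data2Seq` modules and the shared encoder is the pretrained block stack shown in the Model Zoo demo below.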
# 🌟 News
* **2023.8.17:** Released code to directly obtain embeddings from multiple modalities. We will further release code for applying Meta-Transformer to human-centric vision tasks.
* **2023.8.2:** 🎉🎉🎉 The implementation of Meta-Transformer for image, point cloud, graph, tabular, time-series, X-Ray, hyper-spectrum, LiDAR data has been released. We also release a very powerful foundation model for Autonomous Driving 🚀🚀🚀.
* **2023.7.22:** Pretrained weights and a usage demo for our Meta-Transformer have been released. Comprehensive documentation and implementation of the image modality are underway and will be released soon. Stay tuned for more exciting updates!⌛⌛⌛
* **2023.7.21:** Paper is released at [arxiv](https://arxiv.org/abs/2307.10802), and code will be gradually released.
* **2023.7.8:** GitHub repository initialization.

# 🔓 Model Zoo
Open-source Modality-Agnostic Models
| Model | Pretraining | Scale | #Param | Download | Download (China mirror) |
| :------------: | :----------: | :----------------------: | :----: | :---------------------------------------------------------------------------------------------------: | :--------: |
| Meta-Transformer-B16 | LAION-2B | Base | 85M | [ckpt](https://drive.google.com/file/d/19ahcN2QKknkir_bayhTW5rucuAiX0OXq/view?usp=sharing) | [ckpt](https://download.openxlab.org.cn/models/zhangyiyuan/MetaTransformer/weight//Meta-Transformer_base_patch16_encoder)
| Meta-Transformer-L14 | LAION-2B | Large | 302M | [ckpt](https://drive.google.com/file/d/15EtzCBAQSqmelhdLz6k880A19_RpcX9B/view?usp=drive_link) | [ckpt](https://download.openxlab.org.cn/models/zhangyiyuan/MetaTransformer/weight//Meta-Transformer_large_patch14_encoder)
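The Google Drive checkpoints above can also be fetched programmatically. A minimal sketch, assuming the third-party `gdown` package (`pip install gdown`, not part of this repository), with the file ID taken from the Meta-Transformer-B16 link in the table:

```python
import gdown

# File ID copied from the Meta-Transformer-B16 Google Drive link above.
file_id = "19ahcN2QKknkir_bayhTW5rucuAiX0OXq"
gdown.download(
    f"https://drive.google.com/uc?id={file_id}",
    "Meta-Transformer_base_patch16_encoder.pth",  # filename used in the demo below
    quiet=False,
)
```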
* Demo of Use for Pretrained Encoder

```python
import torch
import torch.nn as nn
from timm.models.vision_transformer import Block
from Data2Seq import Data2Seq
video_tokenizer = Data2Seq(modality='video', dim=768)
audio_tokenizer = Data2Seq(modality='audio', dim=768)
time_series_tokenizer = Data2Seq(modality='time-series', dim=768)

# Tokenize each modality (video, audio, and time_data are the raw inputs you supply)
# and concatenate the token sequences along the sequence dimension.
features = torch.concat([video_tokenizer(video), audio_tokenizer(audio), time_series_tokenizer(time_data)], dim=1)
# For base-scale encoder:
ckpt = torch.load("Meta-Transformer_base_patch16_encoder.pth")
encoder = nn.Sequential(*[
Block(
dim=768,
num_heads=12,
mlp_ratio=4.,
qkv_bias=True,
norm_layer=nn.LayerNorm,
act_layer=nn.GELU
)
for i in range(12)])
encoder.load_state_dict(ckpt,strict=True)
# For large-scale encoder:
ckpt = torch.load("Meta-Transformer_large_patch14_encoder.pth")
encoder = nn.Sequential(*[
Block(
dim=1024,
num_heads=16,
mlp_ratio=4.,
qkv_bias=True,
norm_layer=nn.LayerNorm,
act_layer=nn.GELU
)
for i in range(24)])
encoder.load_state_dict(ckpt,strict=True)
encoded_features = encoder(features)
```
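If you only need the encoder as a frozen feature extractor, the demo above can be wrapped in eval mode and `torch.no_grad()`. A small usage sketch (the mean-pooling step is an illustrative choice, not prescribed by the repository):

```python
# Continues from the demo above: `encoder` and `features` are already defined.
encoder.eval()
for p in encoder.parameters():
    p.requires_grad_(False)               # freeze the pretrained encoder

with torch.no_grad():
    encoded_features = encoder(features)  # (batch, num_tokens, dim)

pooled = encoded_features.mean(dim=1)     # one embedding vector per sample
```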
# 🕙 ToDo

- [x] Meta-Transformer with Large Language Models.
- [x] Multimodal Joint Training with Meta-Transformer.
- [x] Support More Modalities and More Tasks.

# Contact
🚀🚀🚀 We aspire to shape this repository into **a formidable foundation for mainstream AI perception tasks across diverse modalities**. Your contributions can play a significant role in this endeavor, and we warmly welcome your participation in our project!

To contact us, never hesitate to send an email to `[email protected]`, `[email protected]`, `[email protected]`, or `[email protected]`!
# Citation
If the code and paper help your research, please kindly cite:
```
@article{zhang2023meta,
title={Meta-transformer: A unified framework for multimodal learning},
author={Zhang, Yiyuan and Gong, Kaixiong and Zhang, Kaipeng and Li, Hongsheng and Qiao, Yu and Ouyang, Wanli and Yue, Xiangyu},
journal={arXiv preprint arXiv:2307.10802},
year={2023}
}
```
# License
This project is released under the [Apache 2.0 license](LICENSE).
# Acknowledgement
This code is developed based on excellent open-source projects including [MMClassification](https://github.com/open-mmlab/mmpretrain/tree/mmcls-1.x), [MMDetection](https://github.com/open-mmlab/mmdetection), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [OpenPoints](https://github.com/guochengqian/openpoints), [Time-Series-Library](https://github.com/thuml/Time-Series-Library), [Graphormer](https://github.com/microsoft/Graphormer), [SpectralFormer](https://github.com/danfenghong/IEEE_TGRS_SpectralFormer), and [ViT-Adapter](https://github.com/czczup/ViT-Adapter).