# awesome-mlp-papers [![Awesome](https://awesome.re/badge.svg)](https://awesome.re)
An up-to-date list of papers on MLP-based models without attention! Maintained by [Haofan Wang](https://haofanwang.github.io/) ([email protected]).

## MLP is All You Need (newest papers at the top!)

[UNeXt: MLP-based Rapid Medical Image Segmentation Network](https://arxiv.org/abs/2203.04967), Johns Hopkins University, [code](https://github.com/jeya-maria-jose/UNeXt-pytorch), MICCAI 2022

[MotionMixer: MLP-based 3D Human Body Pose Forecasting](https://www.ijcai.org/proceedings/2022/0111.pdf), Mercedes-Benz AG, [code](https://github.com/MotionMLP/MotionMixer), IJCAI 2022 Oral

[MLP-3D: A MLP-like 3D Architecture with Grouped Time Mixing](https://openaccess.thecvf.com/content/CVPR2022/html/Qiu_MLP-3D_A_MLP-Like_3D_Architecture_With_Grouped_Time_Mixing_CVPR_2022_paper.html), JD AI Research, CVPR 2022

[MAXIM: Multi-Axis MLP for Image Processing](https://arxiv.org/pdf/2201.02973.pdf), Google Research, UT-Austin, 2022

[ConvMLP: Hierarchical Convolutional MLPs for Vision](https://arxiv.org/pdf/2109.04454.pdf), University of Oregon, 2021

[Axial-MLP for automatic segmentation of choroid plexus in multiple sclerosis](https://arxiv.org/pdf/2109.03778.pdf), Paris Brain Institute - Inria, 2021

[Sparse-MLP: A Fully-MLP Architecture with Conditional Computation](https://arxiv.org/abs/2109.02008), NUS, 2021

[Hire-MLP: Vision MLP via Hierarchical Rearrangement](https://arxiv.org/abs/2108.13341), Noah’s Ark Lab, Huawei Technologies, 2021

[RaftMLP: Do MLP-based Models Dream of Winning Over Computer Vision?](https://arxiv.org/abs/2108.04384), Rikkyo University, 2021

[S2-MLPv2: Improved Spatial-Shift MLP Architecture for Vision](https://arxiv.org/abs/2108.01072), Baidu Research, 2021

[CycleMLP: A MLP-like Architecture for Dense Prediction](https://arxiv.org/abs/2107.10224), The University of Hong Kong, 2021, [[code]](https://github.com/ShoufaChen/CycleMLP)

[AS-MLP: An Axial Shifted MLP Architecture for Vision](https://arxiv.org/abs/2107.08391), ShanghaiTech University, 2021

[PC-MLP: Model-based Reinforcement Learning with Policy Cover Guided Exploration](https://arxiv.org/abs/2107.07410), CMU, ICML 2021

[Global Filter Networks for Image Classification](https://arxiv.org/abs/2107.00645), Tsinghua University, 2021

[Rethinking Token-Mixing MLP for MLP-based Vision Backbone](https://arxiv.org/abs/2106.14882), Baidu Research, 2021

[Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition](https://arxiv.org/abs/2106.12368), NUS, 2021

[S2-MLP: Spatial-Shift MLP Architecture for Vision](https://arxiv.org/abs/2106.07477), Baidu Research, 2021

[Graph-MLP: Node Classification without Message Passing in Graph](https://arxiv.org/abs/2106.04051), MegVii Inc, 2021

[Container: Context Aggregation Network](https://arxiv.org/abs/2106.01401), CUHK, 2021

[Less is More: Pay Less Attention in Vision Transformers](https://arxiv.org/abs/2105.14217), Monash University, 2021

[Can Attention Enable MLPs To Catch Up With CNNs?](https://arxiv.org/abs/2105.15078), Tsinghua University, CVM 2021

[Pay Attention to MLPs](https://arxiv.org/abs/2105.08050), Google Research, 2021, [[code]](https://github.com/jaketae/g-mlp)

[FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824), Google Research, 2021, [[code]](https://github.com/rishikksh20/FNet-pytorch)

[ResMLP: Feedforward networks for image classification with data-efficient training](https://arxiv.org/abs/2105.03404), Facebook AI, CVPR 2021, [[code]](https://github.com/lucidrains/res-mlp-pytorch)

[Are Pre-trained Convolutions Better than Pre-trained Transformers?](https://arxiv.org/abs/2105.03322), Google Research, ACL 2021

[Do You Even Need Attention? A Stack of Feed-Forward Layers Does Surprisingly Well on ImageNet](https://arxiv.org/abs/2105.02723), Oxford University, 2021, [[code]](https://github.com/lukemelas/do-you-even-need-attention)

[RepMLP: Re-parameterizing Convolutions into Fully-connected Layers for Image Recognition](https://arxiv.org/abs/2105.01883), Tsinghua University, 2021, [[code]](https://github.com/DingXiaoH/RepMLP)

[Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks](https://arxiv.org/abs/2105.02358), Tsinghua University, 2021

[MLP-Mixer: An all-MLP Architecture for Vision](https://arxiv.org/abs/2105.01601), Google Research, 2021, [[code]](https://github.com/lucidrains/mlp-mixer-pytorch) (see the minimal block sketch after this list)

[Synthesizer: Rethinking Self-Attention in Transformer Models](https://arxiv.org/abs/2005.00743), Google Research, ICML 2021
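
Most of the vision models above share the recipe popularized by MLP-Mixer: alternate a token-mixing MLP applied across patches with a channel-mixing MLP applied per patch, with no attention anywhere. Below is a minimal PyTorch sketch of one such block; the class names and dimensions are illustrative, not taken from any specific paper's configuration.

```python
import torch
import torch.nn as nn

class MlpBlock(nn.Module):
    """Two-layer feed-forward network with a GELU nonlinearity."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, dim),
        )

    def forward(self, x):
        return self.net(x)

class MixerBlock(nn.Module):
    """One Mixer layer: token-mixing MLP across patches, then channel-mixing MLP."""
    def __init__(self, num_patches, dim, token_hidden, channel_hidden):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = MlpBlock(num_patches, token_hidden)
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = MlpBlock(dim, channel_hidden)

    def forward(self, x):                            # x: (batch, patches, channels)
        y = self.norm1(x).transpose(1, 2)             # (batch, channels, patches)
        x = x + self.token_mlp(y).transpose(1, 2)     # mix information across patches
        x = x + self.channel_mlp(self.norm2(x))       # mix information across channels
        return x

# Usage with illustrative sizes: 196 patches (14x14 grid), 512 channels.
block = MixerBlock(num_patches=196, dim=512, token_hidden=256, channel_hidden=2048)
out = block(torch.randn(2, 196, 512))                 # -> torch.Size([2, 196, 512])
```

Note that the token-mixing weights are shared across channels (and vice versa), which is what lets these architectures drop self-attention entirely.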

## Contributing
Please help keep this list up to date by submitting an [issue](https://github.com/haofanwang/awesome-mlp-papers/issues) or a [pull request](https://github.com/haofanwang/awesome-mlp-papers/pulls), using the following entry format:

```markdown
- Paper Name [[pdf]](link) [[code]](link)
```

## Other topics
More curated collections of advanced resources can be found at [Awesome-Computer-Vision](https://github.com/haofanwang/Awesome-Computer-Vision).