Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
https://github.com/facebookresearch/mmf
- Host: GitHub
- URL: https://github.com/facebookresearch/mmf
- Owner: facebookresearch
- License: other
- Created: 2018-06-27T04:52:40.000Z (over 6 years ago)
- Default Branch: main
- Last Pushed: 2024-05-25T03:02:04.000Z (6 months ago)
- Last Synced: 2024-06-12T08:23:55.868Z (5 months ago)
- Topics: captioning, deep-learning, dialog, hateful-memes, multi-tasking, multimodal, pretrained-models, pytorch, textvqa, vqa
- Language: Python
- Homepage: https://mmf.sh/
- Size: 17.1 MB
- Stars: 5,432
- Watchers: 115
- Forks: 923
- Open Issues: 146
- Metadata Files:
  - Readme: README.md
  - Contributing: .github/CONTRIBUTING.md
  - License: LICENSE
  - Code of conduct: .github/CODE_OF_CONDUCT.md
Awesome Lists containing this project
- awesome-vision-language-pretraining
- awesome-python-machine-learning-resources - GitHub (30% open · ⏱️ 11.08.2022) (Image Data & CV)
- awesome-list - MMF - A modular framework for vision and language multimodal research by Facebook AI Research, based on PyTorch. (Deep Learning Framework / High-Level DL APIs)
README
---
MMF is a modular framework for vision and language multimodal research from Facebook AI Research. MMF contains reference implementations of state-of-the-art vision and language models and has powered multiple research projects at Facebook AI Research. See the full list of projects inside or built on MMF [here](https://mmf.sh/docs/notes/projects).
MMF is powered by PyTorch, supports distributed training, and is un-opinionated, scalable, and fast. Use MMF to **_bootstrap_** your next vision and language multimodal research project by following the [installation instructions](https://mmf.sh/docs/). Take a look at the list of MMF features [here](https://mmf.sh/docs/getting_started/features).
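For a concrete taste of bootstrapping, the snippet below is a minimal sketch of running inference with a pretrained model from MMF's model zoo. The zoo key `mmbt.hateful_memes.images`, the `classify` helper, and the image URL are assumptions to verify against the MMF docs for your installed version:

```python
from mmf.models.mmbt import MMBT

# Pull a pretrained MMBT checkpoint from the MMF model zoo.
# NOTE: the zoo key is an assumption; see the MMF docs for the
# keys shipped with your installed version.
model = MMBT.from_pretrained("mmbt.hateful_memes.images")

# Classify an image/text pair (image path or URL, plus text).
# The URL below is a placeholder, not a real asset.
output = model.classify("https://example.com/meme.jpg", "some meme text")
print(output)  # e.g. {"label": 0, "confidence": 0.9}
```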
MMF also acts as a **starter codebase** for challenges around vision and
language datasets (the Hateful Memes, TextVQA, TextCaps, and VQA challenges). MMF was formerly known as Pythia. For an overview of how datasets and models work inside MMF, check out MMF's [video overview](https://mmf.sh/docs/getting_started/video_overview).
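To give a flavor of how models plug into MMF, here is a schematic sketch of registering a custom model with MMF's registry so that experiment configs can select it by name. The model name `simple_concat`, the sample-list keys, and the layer sizes are illustrative assumptions, not part of MMF itself:

```python
import torch
from mmf.common.registry import registry
from mmf.models.base_model import BaseModel


# Register the model under a key that experiment configs can
# reference, e.g. `model=simple_concat` (name is an assumption).
@registry.register_model("simple_concat")
class SimpleConcat(BaseModel):
    def __init__(self, config):
        super().__init__(config)

    def build(self):
        # MMF calls build() once to construct the model's layers.
        # The feature dimensions here are illustrative assumptions.
        self.text_proj = torch.nn.Linear(300, 128)
        self.image_proj = torch.nn.Linear(2048, 128)
        self.classifier = torch.nn.Linear(256, 2)

    def forward(self, sample_list):
        # sample_list bundles the batched fields produced by the
        # dataset; the exact keys depend on the dataset config.
        text = self.text_proj(sample_list["text_embedding"].mean(dim=1))
        image = self.image_proj(sample_list["image_feature_0"].mean(dim=1))
        logits = self.classifier(torch.cat([text, image], dim=-1))
        # MMF's losses and metrics look for a "scores" key.
        return {"scores": logits}
```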
## Installation
Follow the installation instructions in the [documentation](https://mmf.sh/docs/).
## Documentation
Learn more about MMF [here](https://mmf.sh/docs).
## Citation
If you use MMF in your work or use any models published in MMF, please cite:
```bibtex
@misc{singh2020mmf,
author = {Singh, Amanpreet and Goswami, Vedanuj and Natarajan, Vivek and Jiang, Yu and Chen, Xinlei and Shah, Meet and
Rohrbach, Marcus and Batra, Dhruv and Parikh, Devi},
title = {MMF: A multimodal framework for vision and language research},
howpublished = {\url{https://github.com/facebookresearch/mmf}},
year = {2020}
}
```
## License
MMF is licensed under the BSD license, available in the [LICENSE](LICENSE) file.