Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Emu Series: Generative Multimodal Models from BAAI
https://github.com/baaivision/Emu
foundation-models generative-pretraining-in-multimodality in-context-learning instruct-tuning multimodal-generalist multimodal-pretraining
Last synced: 3 months ago
JSON representation
Emu Series: Generative Multimodal Models from BAAI
- Host: GitHub
- URL: https://github.com/baaivision/Emu
- Owner: baaivision
- License: apache-2.0
- Created: 2023-07-11T00:10:19.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-03-08T07:09:57.000Z (8 months ago)
- Last Synced: 2024-06-21T18:56:01.558Z (5 months ago)
- Topics: foundation-models, generative-pretraining-in-multimodality, in-context-learning, instruct-tuning, multimodal-generalist, multimodal-pretraining
- Language: Python
- Homepage: https://baaivision.github.io/emu2/
- Size: 46.3 MB
- Stars: 1,551
- Watchers: 21
- Forks: 79
- Open Issues: 39
Metadata Files:
- Readme: README.md
- License: LICENSE
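The metadata above is also exposed programmatically (see the "JSON representation" link). The sketch below shows how one might fetch it with Python; the endpoint path, query parameter, and response field names are assumptions about the awesome.ecosyste.ms API, so check https://awesome.ecosyste.ms/docs for the actual routes.

```python
import requests

# Hypothetical project lookup endpoint on the awesome.ecosyste.ms API.
# The exact path and the "url" query parameter are assumptions; see
# https://awesome.ecosyste.ms/docs for the documented routes.
API = "https://awesome.ecosyste.ms/api/v1/projects/lookup"

resp = requests.get(
    API,
    params={"url": "https://github.com/baaivision/Emu"},
    timeout=30,
)
resp.raise_for_status()
project = resp.json()

# Field names such as "url", "stars" and "last_synced_at" are assumed to
# mirror the metadata shown above; the real schema may differ.
print(project.get("url"), project.get("stars"), project.get("last_synced_at"))
```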
Awesome Lists containing this project
- StarryDivineSky - baaivision/Emu
- AiTreasureBox - baaivision/Emu - Emu Series: Generative Multimodal Models from BAAI
README
Emu: Generative Multimodal Models from BAAI
---
- [**Emu1**](Emu1) (ICLR 2024, 2023/07) - Generative Pretraining in Multimodality
- [**Emu2**](Emu2) (CVPR 2024, 2023/12) - Generative Multimodal Models are In-Context Learners
## News
- 2024.2 **Emu1 and Emu2 are accepted by ICLR 2024 and CVPR 2024 respectively! 🎉**
- 2023.12 Inference code, model and demo of Emu2 are available (a hedged loading sketch follows this news list). Enjoy the [demo](http://218.91.113.230:9002/).
- 2023.12 We have released Emu2, the largest open generative multimodal model to date, which achieves new state of the art on multimodal understanding and generation tasks.
- 2023.7 Inference code and model of Emu are available.
- 2023.7 We have released Emu, a multimodal generalist that can seamlessly generate images and text in multimodal context.
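Here is a minimal, hedged sketch of running Emu2 inference through Hugging Face Transformers with remote code. The checkpoint id `BAAI/Emu2`, the `[<IMG_PLH>]` image placeholder, and the `build_input_ids` helper come from the public model card rather than this README, so treat them as assumptions and consult the repository's official inference code for authoritative usage.

```python
# Sketch of Emu2 inference via Hugging Face Transformers (assumptions noted below).
from PIL import Image
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "BAAI/Emu2" is the assumed Hub checkpoint; trust_remote_code loads the custom model class.
tokenizer = AutoTokenizer.from_pretrained("BAAI/Emu2")
model = AutoModelForCausalLM.from_pretrained(
    "BAAI/Emu2",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda").eval()

# One interleaved image-text query; the placeholder marks where the image goes.
query = "[<IMG_PLH>]Describe the image in detail:"
image = Image.open("example.jpg").convert("RGB")  # any local image

# build_input_ids() is provided by the model's remote code (assumed interface).
inputs = model.build_input_ids(text=[query], tokenizer=tokenizer, image=[image])

with torch.no_grad():
    outputs = model.generate(
        input_ids=inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        image=inputs["image"].to(torch.bfloat16),
        max_new_tokens=64,
    )

print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```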
## Highlights
- State-of-the-art performance
- Next-generation capabilities
- A base model for diverse tasks

We hope to foster the growth of our community through open-sourcing and promoting collaboration👬. Let's step towards multimodal intelligence together🍻.
## Contact
- **We are hiring** at all levels in the BAAI Vision Team, including full-time researchers, engineers and interns.
If you are interested in working with us on **foundation model, visual perception and multimodal learning**, please contact [Xinlong Wang](https://www.xloong.wang/) (`[email protected]`).

## Misc
[![Stargazers repo roster for @baaivision/Emu](https://bytecrank.com/nastyox/reporoster/php/stargazersSVG.php?user=baaivision&repo=Emu)](https://github.com/baaivision/Emu/stargazers)
[![Forkers repo roster for @baaivision/Emu](https://bytecrank.com/nastyox/reporoster/php/forkersSVG.php?user=baaivision&repo=Emu)](https://github.com/baaivision/Emu/network/members)
[![Star History Chart](https://api.star-history.com/svg?repos=baaivision/Emu&type=Date)](https://star-history.com/#baaivision/Emu&Date)