Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/zhkkke/modnet
A Trimap-Free Portrait Matting Solution in Real Time [AAAI 2022]
- Host: GitHub
- URL: https://github.com/zhkkke/modnet
- Owner: ZHKKKe
- License: apache-2.0
- Created: 2020-11-23T12:44:53.000Z (about 4 years ago)
- Default Branch: master
- Last Pushed: 2024-05-06T14:28:28.000Z (8 months ago)
- Last Synced: 2024-11-28T17:07:36.642Z (28 days ago)
- Topics: portrait-matting
- Language: Python
- Homepage:
- Size: 59.4 MB
- Stars: 3,835
- Watchers: 102
- Forks: 636
- Open Issues: 66
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# MODNet: Trimap-Free Portrait Matting in Real Time
MODNet: Real-Time Trimap-Free Portrait Matting via Objective Decomposition (AAAI 2022)
MODNet is a model for real-time portrait matting that requires only RGB image input.
Online Application |
Research Demo |
AAAI 2022 Paper |
Supplementary Video

Community |
Code |
PPM Benchmark |
License |
Acknowledgement |
Citation |
Contact

---
## Online Application
The model used in the online demo (unpublished) is only **7M**! It processes **2K**-resolution images at **fast** speed on common PCs and mobile devices, and performs **better** than the research demos!
Please try online portrait image matting on [my personal homepage](https://zhke.io/#/?modnet_demo) for fun!

## Research Demo
All the models behind the following demos are trained on the datasets mentioned in [our paper](https://arxiv.org/pdf/2011.11961.pdf).
### Portrait Image Matting
We provide an [online Colab demo](https://colab.research.google.com/drive/1GANpbKT06aEFiW-Ssx0DQnnEADcXwQG6?usp=sharing) for portrait image matting.
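Once a demo gives you an alpha matte, replacing the background is just the standard compositing equation I = αF + (1 − α)B. A minimal NumPy sketch (array shapes here are illustrative, not part of the demo's API):

```python
import numpy as np

def composite(foreground, background, alpha):
    """Blend foreground over background using an alpha matte.

    foreground, background: float arrays of shape (H, W, 3) in [0, 1]
    alpha: float array of shape (H, W) in [0, 1], e.g. a predicted matte
    """
    a = alpha[..., None]  # broadcast the matte over the color channels
    return a * foreground + (1.0 - a) * background

# Tiny example: a fully-opaque matte keeps the foreground unchanged.
fg = np.ones((2, 2, 3)) * 0.8
bg = np.zeros((2, 2, 3))
out = composite(fg, bg, np.ones((2, 2)))
```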
It allows you to upload portrait images and predict/visualize/download the alpha mattes.

### Portrait Video Matting
We provide two real-time portrait video matting demos based on WebCam. When using the demo, you can move the WebCam around at will.
If you have an Ubuntu system, we recommend you to try the [offline demo](demo/video_matting/webcam) to get a higher *fps*. Otherwise, you can access the [online Colab demo](https://colab.research.google.com/drive/1Pt3KDSc2q7WxFvekCnCLD8P0gBEbxm6J?usp=sharing).
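Conceptually, the webcam demos run matting frame by frame and blend each frame against a chosen background. A schematic loop in NumPy (`predict_alpha` is a hypothetical stand-in for MODNet inference, not the demo's actual API):

```python
import numpy as np

def predict_alpha(frame):
    # Hypothetical stand-in for MODNet inference; a real demo would run
    # the network here. This dummy treats bright pixels as foreground.
    return (frame.mean(axis=-1) > 0.5).astype(np.float32)

def matting_loop(frames, background):
    """Blend each frame against `background` using its predicted matte."""
    outputs = []
    for frame in frames:
        alpha = predict_alpha(frame)[..., None]
        outputs.append(alpha * frame + (1.0 - alpha) * background)
    return outputs

# Two synthetic 4x4 "frames": one bright (foreground), one dark.
frames = [np.full((4, 4, 3), 0.9), np.full((4, 4, 3), 0.1)]
green = np.zeros((4, 4, 3)); green[..., 1] = 1.0
results = matting_loop(frames, green)
```

In a real demo, the per-frame model call dominates the loop, which is why the offline Ubuntu demo achieves a higher *fps* than the Colab version.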
We also provide an [offline demo](demo/video_matting/custom) that allows you to process custom videos.

## Community
We share some cool applications/extensions of MODNet built by the community.
- **Colab Demo of Bokeh (Blur Background)**
You can try [this Colab demo](https://colab.research.google.com/github/eyaler/avatars4all/blob/master/yarok.ipynb) (built by [@eyaler](https://github.com/eyaler)) to blur the background based on MODNet!

- **ONNX Version of MODNet**
You can convert the pre-trained MODNet to an ONNX model by using [this code](onnx) (provided by [@manthan3C273](https://github.com/manthan3C273)). You can also try [this Colab demo](https://colab.research.google.com/drive/1P3cWtg8fnmu9karZHYDAtmm1vj1rgA-f?usp=sharing) for MODNet image matting (ONNX version).

- **TorchScript Version of MODNet**
You can convert the pre-trained MODNet to a TorchScript model by using [this code](torchscript) (provided by [@yarkable](https://github.com/yarkable)).

- **TensorRT Version of MODNet**
You can access [this GitHub repository](https://github.com/jkjung-avt/tensorrt_demos) to try the TensorRT version of MODNet (provided by [@jkjung-avt](https://github.com/jkjung-avt)).

- **Docker Container for MODNet**
You can access [this GitHub repository](https://github.com/nahidalam/modnet_docker) for a containerized version of MODNet with the Docker environment (provided by [@nahidalam](https://github.com/nahidalam)).

There are some resources about MODNet from the community.
- [Video from What's AI YouTube Channel](https://youtu.be/rUo0wuVyefU)
- [Article from Louis Bouchard's Blog](https://www.louisbouchard.ai/remove-background/)

## Code
We provide the [code](src/trainer.py) of MODNet training iteration, including:
- **Supervised Training**: Train MODNet on a labeled matting dataset
- **SOC Adaptation**: Adapt a trained MODNet to an unlabeled dataset

In code comments, we provide examples for using the functions.
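As the paper describes, MODNet decomposes the matting objective into semantic, detail, and matte sub-objectives; at the matte level, trimap-free training typically combines an alpha-prediction loss with a composition loss. A simplified NumPy illustration of these two terms (this is not the repository's actual loss code; see `src/trainer.py` for that):

```python
import numpy as np

def matte_losses(pred_alpha, gt_alpha, image, fg, bg):
    """Two common matting loss terms, in simplified form.

    pred_alpha, gt_alpha: (H, W) alpha mattes in [0, 1]
    image: (H, W, 3) the original composited image
    fg, bg: (H, W, 3) ground-truth foreground and background
    """
    # L1 loss between predicted and ground-truth alpha
    alpha_loss = np.abs(pred_alpha - gt_alpha).mean()
    # Composition loss: recomposite with the predicted matte and
    # compare against the original image
    a = pred_alpha[..., None]
    recomposed = a * fg + (1.0 - a) * bg
    comp_loss = np.abs(recomposed - image).mean()
    return alpha_loss, comp_loss

# Sanity check: a perfect prediction drives both terms to zero.
gt = np.full((4, 4), 0.5)
fg = np.ones((4, 4, 3)); bg = np.zeros((4, 4, 3))
image = 0.5 * fg + 0.5 * bg
a_loss, c_loss = matte_losses(gt, gt, image, fg, bg)
```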
## PPM Benchmark
The PPM benchmark is released in a separate repository: [PPM](https://github.com/ZHKKKe/PPM).

## License
The code, models, and demos in this repository (excluding GIF files under the folder `doc/gif`) are released under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).

## Acknowledgement
- We thank
[@yzhou0919](https://github.com/yzhou0919), [@eyaler](https://github.com/eyaler), [@manthan3C273](https://github.com/manthan3C273), [@yarkable](https://github.com/yarkable), [@jkjung-avt](https://github.com/jkjung-avt), [@manzke](https://github.com/manzke), [@nahidalam](https://github.com/nahidalam),
[the Gradio team](https://github.com/gradio-app/gradio), [What's AI YouTube Channel](https://www.youtube.com/channel/UCUzGQrN-lyyc0BWTYoJM_Sg), [Louis Bouchard's Blog](https://www.louisbouchard.ai),
for their contributions to this repository or their cool applications/extensions/resources of MODNet.

## Citation
If this work helps your research, please consider citing:

```bibtex
@InProceedings{MODNet,
author = {Zhanghan Ke and Jiayu Sun and Kaican Li and Qiong Yan and Rynson W.H. Lau},
title = {MODNet: Real-Time Trimap-Free Portrait Matting via Objective Decomposition},
booktitle = {AAAI},
year = {2022},
}
```

## Contact
This repository is maintained by Zhanghan Ke ([@ZHKKKe](https://github.com/ZHKKKe)).
For questions, please contact `[email protected]`.