Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/hughplay/dfnet
:art: Deep Fusion Network for Image Completion - ACMMM 2019
acmmm2019 deep-learning dfnet edgeconnect fusion-block image-completion image-inpainting inpainting pytorch
- Host: GitHub
- URL: https://github.com/hughplay/dfnet
- Owner: hughplay
- License: other
- Created: 2019-04-11T09:34:20.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2023-04-20T02:11:07.000Z (over 1 year ago)
- Last Synced: 2024-05-14T00:05:31.091Z (6 months ago)
- Topics: acmmm2019, deep-learning, dfnet, edgeconnect, fusion-block, image-completion, image-inpainting, inpainting, pytorch
- Language: Python
- Homepage: https://hongxin2019.github.io/pdf/mm-2019-dfnet.pdf
- Size: 8.23 MB
- Stars: 215
- Watchers: 11
- Forks: 44
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE.md
# Deep Fusion Network for Image Completion
Official repository for ["Deep Fusion Network for Image Completion"](https://github.com/hughplay/DFNet).
**Figure:** *Results from DFNet. Fusion Result = (1 - Alpha) \* Input + Alpha \* Raw. Both Raw and Alpha are model outputs.*
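The composition in the caption is a per-pixel convex blend of the input and the raw network output. A minimal sketch in plain Python, with scalar pixels standing in for full tensors (the names here are illustrative, not the repository's API):

```python
def alpha_blend(inp, raw, alpha):
    """Fusion Result = (1 - alpha) * input + alpha * raw, per pixel."""
    return (1 - alpha) * inp + alpha * raw

# alpha = 0 keeps the known input pixel; alpha = 1 keeps the raw
# network prediction; in-between values give a smooth transition
# near the hole boundary.
known = alpha_blend(0.2, 0.9, 0.0)   # 0.2 -> input pixel preserved
filled = alpha_blend(0.2, 0.9, 1.0)  # 0.9 -> prediction used
edge = alpha_blend(0.2, 0.9, 0.5)    # blended boundary pixel
```

Because both `Raw` and `Alpha` are produced by the model, the network itself learns where to trust the input and where to trust its own prediction.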
> **Deep Fusion Network for Image Completion**
> Xin Hong, Pengfei Xiong, Renhe Ji, Haoqiang Fan
> *Published in Proceedings of the 27th ACM International Conference on Multimedia (ACMMM 2019)*

[![](https://img.shields.io/badge/-code-green?style=flat-square&logo=github&labelColor=gray)](https://github.com/hughplay/DFNet)
[![](https://img.shields.io/badge/-pdf-b31b1b?style=flat-square&logo=adobeacrobatreader)](https://dl.acm.org/doi/pdf/10.1145/3343031.3351002)
[![](https://img.shields.io/badge/Open_in_Colab-blue?style=flat-square&logo=google-colab&labelColor=gray)](https://colab.research.google.com/github/hughplay/DFNet/blob/master/demo.ipynb)
[![](https://img.shields.io/badge/PyTorch-ee4c2c?style=flat-square&logo=pytorch&logoColor=white)](https://pytorch.org/get-started/locally/)[![](docs/_static/imgs/hydra.svg)](https://hydra.cc)
## Description
Deep image completion methods often fail to blend the restored region harmoniously into the existing content, especially near the boundary, and they often fail to complete complex structures.

We first introduce a **Fusion Block** to generate a flexible alpha composition map that combines the known and unknown regions. It builds a bridge for structural and texture information, so that information in the known region can propagate naturally into the completion area. As a result, the completion results have a smooth transition near the boundary of the completion area. Furthermore, the architecture of the fusion block enables us to apply **multi-scale constraints**, which greatly improve the structural consistency of DFNet. Moreover, **the fusion block and multi-scale constraints are easy to apply to other existing deep image completion models**.
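A rough sketch of how such multi-scale constraints can be expressed (illustrative NumPy only; the real fusion-block internals and losses are defined by this repository's training code, and `raw`/`alpha` are predicted by learnable layers rather than passed in):

```python
import numpy as np

def fuse(image, raw, alpha):
    # One fusion-block output at a given scale:
    # (1 - alpha) * image + alpha * raw.
    return (1 - alpha) * image + alpha * raw

def multi_scale_l1(fusions, targets):
    # Constrain the fused output at every scale against the ground
    # truth resized to that scale, then average across scales.
    return sum(np.abs(f - t).mean() for f, t in zip(fusions, targets)) / len(fusions)

# Toy pyramid with two scales (4x4 and 8x8).
targets = [np.full((4, 4), 0.5), np.full((8, 8), 0.5)]
fusions = [fuse(t, t, np.ones_like(t)) for t in targets]  # perfect predictions
loss = multi_scale_l1(fusions, targets)  # 0.0 for a perfect match
```

Because the blend is produced at every decoder resolution, each scale can be supervised directly instead of only constraining the final full-resolution output.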
A fusion block fed with feature maps and the input image will give you a completion result at the same resolution as the given feature maps.

If you find this code useful, please consider starring this repo and citing us:
``` bibtex
@inproceedings{hongDeepFusionNetwork2019,
title = {Deep {{Fusion Network}} for {{Image Completion}}},
booktitle = {Proceedings of the 27th {{ACM International Conference}} on {{Multimedia}}},
author = {Hong, Xin and Xiong, Pengfei and Ji, Renhe and Fan, Haoqiang},
year = {2019},
series = {{{MM}} '19},
pages = {2033--2042},
keywords = {alpha composition,deep fusion network,fusion block,image completion,inpainting}
}
```

## Prerequisites
- Python 3
- PyTorch 1.0
- OpenCV

## Testing
We provide an interactive [Colab demo](https://colab.research.google.com/github/hughplay/DFNet/blob/master/demo.ipynb) for trying DFNet. You can also test our model with the following steps.
Clone this repo:
``` sh
git clone https://github.com/hughplay/DFNet.git
cd DFNet
```

Download the pre-trained models from [Google Drive](https://drive.google.com/drive/folders/1lKJg__prvJTOdgmg9ZDF9II8B1C3YSkN?usp=sharing)
and put them into `model`.

### Testing with Places2 model
There are already some sample images in the `samples/places2` folder.
``` sh
python test.py --model model/model_places2.pth --img samples/places2/img --mask samples/places2/mask --output output/places2 --merge
```

### Testing with CelebA model
There are already some sample images in the `samples/celeba` folder.
``` sh
python test.py --model model/model_celeba.pth --img samples/celeba/img --mask samples/celeba/mask --output output/celeba --merge
```

## Training
Please refer to: https://github.com/deepcodebase/inpaint. It is a work in progress but looks good so far.
## License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.