https://github.com/XingangPan/DragGAN
Official Code for DragGAN (SIGGRAPH 2023)
artificial-intelligence generative-adversarial-network generative-models image-manipulation
- Host: GitHub
- URL: https://github.com/XingangPan/DragGAN
- Owner: XingangPan
- License: other
- Created: 2023-05-18T10:08:02.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-05-18T17:51:40.000Z (11 months ago)
- Last Synced: 2024-10-14T12:01:16.370Z (6 months ago)
- Topics: artificial-intelligence, generative-adversarial-network, generative-models, image-manipulation
- Language: Python
- Homepage: https://vcai.mpi-inf.mpg.de/projects/DragGAN/
- Size: 33.2 MB
- Stars: 35,672
- Watchers: 996
- Forks: 3,448
- Open Issues: 151
Metadata Files:
- Readme: README.md
- License: LICENSE.txt
Awesome Lists containing this project
- awesome-generative-ai - DragGAN - Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold. (Image / Models)
- Awesome-AITools - GitHub
- ai-game-devtools - DragGAN - Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold. [arXiv](https://arxiv.org/abs/2305.10973) (Image / Tool (AI LLM))
- stars - XingangPan/DragGAN - Official Code for DragGAN (SIGGRAPH 2023) (Python)
- awesome-mad - DragGAN - A fusion of image matting and inpainting, well suited to making puppet-style animation; with finer-grained control points the results would likely be even better. (Matting/Inpainting / Effects/Utilities)
- awesome-ai-tools - DragGAN - Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold. (Image / Models)
- awesome-ai - DragGAN - Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold. (Image / Models)
- my-awesome - XingangPan/DragGAN - topics: artificial-intelligence, generative-adversarial-network, generative-models, image-manipulation; pushed_at: 2024-05; stars: 35.9k; forks: 3.4k; Official Code for DragGAN (SIGGRAPH 2023) (Python)
- StarryDivineSky - XingangPan/DragGAN
- awesome-diffusion-categorized - DragGAN
- awesome-llm-and-aigc - DragGAN - [Project page](https://vcai.mpi-inf.mpg.de/projects/DragGAN/). (Summary)
- AiTreasureBox - XingangPan/DragGAN - [Project](https://vcai.mpi-inf.mpg.de/projects/DragGAN/) | [Paper (arXiv)](https://arxiv.org/abs/2305.10973) | Code for DragGAN (SIGGRAPH 2023) (Repos)
- awesome-ai - DragGAN - Interactive Point-based Manipulation on the Generative Image Manifold (AI-Powered Creativity Tools / Images)
README
Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold
Xingang Pan · Ayush Tewari · Thomas Leimkühler · Lingjie Liu · Abhimitra Meka · Christian Theobalt
SIGGRAPH 2023 Conference Proceedings
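The technique named in the title iterates two steps: motion supervision (an optimization step that nudges generator features at each handle point toward the user's target point) and point tracking (re-locating the handle after each step by a nearest-neighbour search in feature space). A minimal numpy sketch of the point-tracking idea; the function name and the toy feature map are hypothetical illustrations, not the actual implementation:

```python
import numpy as np

def track_point(features, template, prev, radius=3):
    """Find the pixel near `prev` whose feature vector best matches `template`.

    features: (H, W, C) feature map
    template: (C,) feature vector of the original handle point
    prev:     (row, col) previous handle location
    """
    h, w, _ = features.shape
    r0, c0 = prev
    best, best_dist = prev, np.inf
    # Exhaustive search in a small window around the previous location.
    for r in range(max(0, r0 - radius), min(h, r0 + radius + 1)):
        for c in range(max(0, c0 - radius), min(w, c0 + radius + 1)):
            d = np.linalg.norm(features[r, c] - template)
            if d < best_dist:
                best, best_dist = (r, c), d
    return best

# Toy feature map: the template feature is planted at (5, 7), so tracking
# from the stale location (4, 6) should snap back to it.
rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 16, 8))
template = feats[5, 7].copy()
print(track_point(feats, template, prev=(4, 6)))  # -> (5, 7)
```

The actual method performs this search on intermediate StyleGAN feature maps rather than on a toy array, which is what lets the handle follow the object as the image changes.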
## Web Demos
[OpenXLab Demo](https://openxlab.org.cn/apps/detail/XingangPan/DragGAN)
## Requirements
If you have a CUDA-capable graphics card, please follow the requirements of [NVlabs/stylegan3](https://github.com/NVlabs/stylegan3#requirements).
The usual installation steps involve the following commands, which set up the correct CUDA version and all required Python packages:
```
conda env create -f environment.yml
conda activate stylegan3
```

Then install the additional requirements:
```
pip install -r requirements.txt
```

Otherwise (for GPU acceleration on macOS with Apple silicon M1/M2, or for CPU-only use), try the following:
```sh
grep -v -E 'nvidia|cuda' environment.yml > environment-no-nvidia.yml && \
    conda env create -f environment-no-nvidia.yml
conda activate stylegan3

# On macOS, fall back to CPU for operators not yet supported by MPS
export PYTORCH_ENABLE_MPS_FALLBACK=1
```

## Run Gradio visualizer in Docker
The provided Docker image is based on the NGC PyTorch image. To quickly try out the visualizer in Docker, run the following:
```sh
# Before building the Docker image, make sure you have cloned this repo and
# downloaded the pretrained models with `python scripts/download_model.py`.
docker build . -t draggan:latest
docker run -p 7860:7860 -v "$PWD":/workspace/src -it draggan:latest bash
# To use an NVIDIA GPU for acceleration inside Docker, add the `--gpus all` flag:
# docker run --gpus all -p 7860:7860 -v "$PWD":/workspace/src -it draggan:latest bash

# Then, inside the container:
cd src && python visualizer_drag_gradio.py --listen
```
Now you can open a shared link from Gradio (printed in the terminal console).
Beware: the Docker image takes about 25 GB of disk space!

## Download pre-trained StyleGAN2 weights
To download pre-trained weights, simply run:
```
python scripts/download_model.py
```
If you want to try StyleGAN-Human and the Landscapes HQ (LHQ) dataset, please download weights from these links: [StyleGAN-Human](https://drive.google.com/file/d/1dlFEHbu-WzQWJl7nBBZYcTyo000H9hVm/view?usp=sharing), [LHQ](https://drive.google.com/file/d/16twEf0T9QINAEoMsWefoWiyhcTd-aiWc/view?usp=sharing), and put them under `./checkpoints`. Feel free to try other pretrained StyleGAN models as well.
## Run DragGAN GUI
To start the DragGAN GUI, simply run:
```sh
sh scripts/gui.sh
```
If you are using Windows, you can run:
```
.\scripts\gui.bat
```

This GUI supports editing GAN-generated images. To edit a real image, you first need to perform GAN inversion using a tool such as [PTI](https://github.com/danielroich/PTI), then load the resulting latent code and model weights into the GUI.
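To make the inversion step concrete: PTI-style tools optimize a latent code by gradient descent until the generator's output matches the target image (and then typically fine-tune the generator weights as well). Below is a toy numpy sketch of that latent-optimization idea, with a fixed random linear map standing in for the generator; all names here are hypothetical, and this is not the PTI API:

```python
import numpy as np

rng = np.random.default_rng(42)
G = rng.normal(size=(64, 16))   # stand-in "generator": image = G @ w
w_true = rng.normal(size=16)
target = G @ w_true             # the "real image" we want to invert

w = np.zeros(16)                # latent code being optimized
lr = 0.01
for _ in range(500):
    residual = G @ w - target
    grad = G.T @ residual       # gradient of 0.5 * ||G @ w - target||^2
    w -= lr * grad              # plain gradient descent on the latent code

# After optimization, the generator reproduces the target from the
# recovered latent code.
print(np.linalg.norm(G @ w - target) < 1e-3)
```

A real inversion replaces the linear map with StyleGAN and the squared error with a perceptual reconstruction loss, but the structure of the loop is the same: freeze the generator, differentiate the loss with respect to the latent, and descend.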
You can also run the DragGAN Gradio demo, which works on both Windows and Linux:
```sh
python visualizer_drag_gradio.py
```

## Acknowledgement
This code is developed based on [StyleGAN3](https://github.com/NVlabs/stylegan3). Part of the code is borrowed from [StyleGAN-Human](https://github.com/stylegan-human/StyleGAN-Human).
(cheers to the community as well)
## License

The code related to the DragGAN algorithm is licensed under [CC-BY-NC](https://creativecommons.org/licenses/by-nc/4.0/).
However, most of this project is available under separate license terms: all code used or modified from [StyleGAN3](https://github.com/NVlabs/stylegan3) is under the [Nvidia Source Code License](https://github.com/NVlabs/stylegan3/blob/main/LICENSE.txt). Any form of use and derivative of this code must preserve the watermarking functionality showing "AI Generated".
## BibTeX
```bibtex
@inproceedings{pan2023draggan,
title={Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold},
  author={Pan, Xingang and Tewari, Ayush and Leimk{\"u}hler, Thomas and Liu, Lingjie and Meka, Abhimitra and Theobalt, Christian},
  booktitle={ACM SIGGRAPH 2023 Conference Proceedings},
year={2023}
}
```