Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/colorful-liyu/Awesome-AI-ART-generation
This is a collection of resources on AI art generation.
List: Awesome-AI-ART-generation
Last synced: about 1 month ago
- Host: GitHub
- URL: https://github.com/colorful-liyu/Awesome-AI-ART-generation
- Owner: colorful-liyu
- Created: 2021-12-10T03:24:06.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2022-12-14T08:44:25.000Z (almost 2 years ago)
- Last Synced: 2024-11-08T08:02:17.949Z (about 1 month ago)
- Size: 23.4 KB
- Stars: 29
- Watchers: 3
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-artificial-intelligence - Awesome-AI-ART-generation - This is a collection of resources on AI art generation. (Image generation)
README
# Awesome-AI-ART-generation
This is a collection of resources on AI art generation, with slides! We are the AI300 lab in the ECE department of Shanghai Jiao Tong University. This list collects the papers shared at our research meetings and group discussions, including slides made by ourselves.
## Contributing
If you think we have missed something, or if you have any suggestions (papers, implementations, or other resources), feel free to open a pull request.
Feedback and contributions are welcome!
## GAN models
Papers | Conference | Year | Code | Speaker | Slides
:-------------------------------------------------------:|:------:|:----:|:----:|:------------------:|:----:|
[Alias-Free Generative Adversarial Networks (StyleGAN3)](https://arxiv.org/abs/2106.12423) | ICCV | 2021 | [here](https://github.com/NVlabs/stylegan3) | Xiaohang Wang | [here](https://1drv.ms/p/s!AlS0P3vuVTvigzN1tENqJ7I6-fgL?e=LNdqOa)
[Anycost GANs for Interactive Image Synthesis and Editing](https://arxiv.org/abs/2103.03243) | CVPR | 2021 | [here](https://github.com/mit-han-lab/anycost-gan) | Yutian Liu | x
[CoCosNet v2: Full-Resolution Correspondence Learning for Image Translation](https://arxiv.org/abs/2012.02047) |CVPR | 2021 | [here](https://github.com/microsoft/CoCosNet-v2) | Jiyao Mao | [here](https://1drv.ms/p/s!AlS0P3vuVTvigzHLrLegglUWRcPL?e=XS46dI)
[Projected GANs Converge Faster](https://arxiv.org/abs/2111.01007) | NeurIPS | 2021 | [here](https://github.com/autonomousvision/projected_gan) | Yuhan Li | [here](https://1drv.ms/p/s!AlS0P3vuVTvigzabp4-izhOKhtuh?e=OgaS12)
[GAN-Supervised Dense Visual Alignment](https://arxiv.org/abs/2112.05143) | arXiv | 2021 | [here](https://github.com/wpeebles/gangealing) | Yuhan Li | [here](https://1drv.ms/p/s!AlS0P3vuVTvigzd_qzNmfcws_Jdu?e=wTlcfU)
[HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping](https://arxiv.org/abs/2106.09965) | IJCAI | 2021 | [here](https://github.com/mindslab-ai/hififace) | Yutian Liu | [here](https://1drv.ms/p/s!AlS0P3vuVTvigzqnvJ8rX1sCpyWZ?e=032RMi)
[Correction Filter for Single Image Super-Resolution: Robustifying Off-the-Shelf Deep Super-Resolvers](https://arxiv.org/abs/1912.00157) | CVPR | 2020 | [here](https://www.catalyzex.com/redirect?url=https://github.com/shadyabh/Correction-Filter) | Xiaohang Wang | [here](https://1drv.ms/p/s!AlS0P3vuVTvigzkllK_O_bL-4ZOI?e=co6IXH)

## GAN Edit Method
Papers | Conference | Year | Code | Speaker | Slides
:-------------------------------------------------------:|:------:|:----:|:----:|:------------------:|:----:|
How to Edit on Latent Space of GAN | x | x | x | Yuhan Li | [here](https://1drv.ms/p/s!AlS0P3vuVTvig0FhBGvLIiFInZb6?e=9h6pAj)

## DDPM related models
Papers | Conference | Year | Code | Speaker | Slides
:-------------------------------------------------------:|:------:|:----:|:----:|:------------------:|:----:|
[ILVR: Conditioning Method for Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2108.02938) | ICCV | 2021 | [here](https://github.com/jychoi118/ilvr_adm) | Jiyao Mao | [here](https://1drv.ms/p/s!AlS0P3vuVTvigzHLrLegglUWRcPL?e=XS46dI)
[Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239) | NeurIPS | 2020 | [here](https://github.com/hojonathanho/diffusion) | Zhilin Zeng | [here](https://1drv.ms/p/s!AlS0P3vuVTvigzDpKqZGk2wH5b6h?e=ivIBLm)
[Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | arXiv | 2021 | [here](https://github.com/microsoft/VQ-Diffusion) | Zhilin Zeng | [here](https://1drv.ms/p/s!AlS0P3vuVTvigzDpKqZGk2wH5b6h?e=ivIBLm)
[Latent Diffusion Models](https://arxiv.org/abs/2112.10752) | arXiv | 2021 | [here](https://github.com/CompVis/latent-diffusion) | Yuhan Li | [here](https://1drv.ms/p/s!AlS0P3vuVTvig0CKHUSEYnVSw6y3?e=vhJ67u)
[Unleashing Transformers: Parallel Token Prediction with Discrete Absorbing Diffusion for Fast High-Resolution Image Generation from Vector-Quantized Codes](https://arxiv.org/abs/2111.12701) | arXiv | 2021 | [here](https://github.com/samb-t/unleashing-transformers) | Yuhan Li | [here](https://1drv.ms/p/s!AlS0P3vuVTvig0CKHUSEYnVSw6y3?e=vhJ67u)
[Score-Based Generative Modeling through Stochastic Differential Equations](https://arxiv.org/abs/2011.13456) | ICLR | 2021 | [here](https://github.com/yang-song/score_sde) | Yuhan Li | [here](https://blog.csdn.net/g11d111/article/details/118026427)
An Introduction About Diffusion Models | x | x | x | Yuhan Li | [here](https://1drv.ms/p/s!AlS0P3vuVTvig0iVhomWaQ_PjzKd?e=9UOBG4)

## Deep Compression
Papers | Conference | Year | Code | Speaker | Slides
:-------------------------------------------------------:|:------:|:----:|:----:|:------------------:|:----:|
A survey of Deep Compression | x | x | x | Yutian Liu | [here](https://1drv.ms/b/s!AlS0P3vuVTvigzyrarUJtYsLko1l?e=CP2KL2)

## Loss Landscape
Papers | Conference | Year | Code | Speaker | Slides
:-------------------------------------------------------:|:------:|:----:|:----:|:------------------:|:----:|
[Visualizing the Loss Landscape of Neural Nets](https://arxiv.org/abs/1712.09913) | NeurIPS | 2018 | [here](https://github.com/tomgoldstein/loss-landscape) | Yutian Liu | [here](https://1drv.ms/p/s!AlS0P3vuVTvig0a0u2LV784R0dy3?e=mH3z3n)
[How Do Vision Transformers Work?](https://arxiv.org/abs/2202.06709) | ICLR | 2022 | [here](https://github.com/xxxnell/how-do-vits-work) | Yutian Liu | [here](https://1drv.ms/p/s!AlS0P3vuVTvig0a0u2LV784R0dy3?e=mH3z3n)
[When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations](https://arxiv.org/abs/2106.01548) | ICLR | 2022 | [here](https://github.com/google-research/vision_transformer) | Yutian Liu | [here](https://1drv.ms/p/s!AlS0P3vuVTvig0a0u2LV784R0dy3?e=mH3z3n)
[An Empirical Analysis of Deep Network Loss Surfaces](https://arxiv.org/abs/1612.04010) | Machine Learning | 2017 | x | Yutian Liu | [here](https://1drv.ms/p/s!AlS0P3vuVTvig0a0u2LV784R0dy3?e=mH3z3n)

## NeRF Related
Papers | Conference | Year | Code | Speaker | Slides
:-------------------------------------------------------:|:------:|:----:|:----:|:------------------:|:----:|
[Plenoxels: Radiance Fields without Neural Networks](https://arxiv.org/abs/2112.05131) | CVPR | 2022 | [here](https://alexyu.net/plenoxels/) | Xiaohang Wang | [here](https://1drv.ms/p/s!AlS0P3vuVTvig0Iu6HEwII8Xt8sE?e=E4wcLV)
[Point-NeRF: Point-based Neural Radiance Fields](https://arxiv.org/abs/2201.08845) | CVPR | 2022 | [here](https://xharlie.github.io/projects/project_sites/pointnerf/) | Xiaohang Wang | [here](https://1drv.ms/p/s!AlS0P3vuVTvig0Iu6HEwII8Xt8sE?e=E4wcLV)

## Implicit Representations
Papers | Conference | Year | Code | Speaker | Slides
:-------------------------------------------------------:|:------:|:----:|:----:|:------------------:|:----:|
[(Implicit)^2: Implicit Layers for Implicit Representations](https://openreview.net/forum?id=AcoMwAU5c0s) | ICLR | 2021 | [here](https://github.com/locuslab/ImpSq) | Xiaohang Wang | [here](https://1drv.ms/p/s!AlS0P3vuVTvigz8P7c063TKg2JhT?e=YQHSt2)

## Text-to-Image and CLIP
Papers | Conference | Year | Code | Speaker | Slides
:-------------------------------------------------------:|:------:|:----:|:----:|:------------------:|:----:|
[CLIP: Learning Transferable Visual Models From Natural Language Supervision](https://arxiv.org/abs/2103.00020) | arXiv | 2021 | [here](https://github.com/OpenAI/CLIP) | Zhilin Zeng | [here](https://1drv.ms/p/s!AlS0P3vuVTvigzvbYEr3wNPCsHD6?e=F3MOdT)
[StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery](https://openaccess.thecvf.com/content/ICCV2021/html/Patashnik_StyleCLIP_Text-Driven_Manipulation_of_StyleGAN_Imagery_ICCV_2021_paper.html) | ICCV | 2021 | [here](https://github.com/orpatashnik/StyleCLIP) | Zhilin Zeng | [here](https://1drv.ms/p/s!AlS0P3vuVTvigzvbYEr3wNPCsHD6?e=F3MOdT)
[CLIPDraw: Exploring Text-to-Drawing Synthesis through Language-Image Encoders](https://arxiv.org/abs/2106.14843) | arXiv | 2021 | [here](https://colab.research.google.com/github/kvfrans/clipdraw/blob/main/clipdraw.ipynb) | Ye Chen | [here](https://1drv.ms/p/s!AlS0P3vuVTvigz7vPn_IoNk6v14k?e=ggzqEL)
[StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Translation](https://arxiv.org/abs/2202.12362) | arXiv | 2022 | [here](https://github.com/pschaldenbrand/StyleCLIPDraw) | Ye Chen | [here](https://1drv.ms/p/s!AlS0P3vuVTvigz7vPn_IoNk6v14k?e=ggzqEL)
A survey of vector drawing | x | x | x | Jiyao Mao | [here](https://1drv.ms/p/s!AlS0P3vuVTvig0XWmI4R1gTHD6Kc?e=2tKAMZ)
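The CLIP paper listed above ranks candidate texts for an image by taking the cosine similarity between their embeddings and applying a temperature-scaled softmax. A minimal NumPy sketch of that scoring step is below; the function name, toy embeddings, and temperature value are illustrative, not taken from the CLIP repository:

```python
import numpy as np

def clip_style_scores(image_emb, text_embs, temperature=0.07):
    """Rank candidate texts for one image, CLIP-style.

    image_emb: (d,) image embedding; text_embs: (n, d) text embeddings.
    Returns a probability distribution over the n texts.
    """
    # L2-normalize so the dot product equals cosine similarity.
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = text_embs @ image_emb / temperature
    # Numerically stable softmax over the candidate texts.
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Toy usage: the text embedding aligned with the image gets the higher score.
image = np.array([1.0, 0.2, 0.0])
texts = np.array([[0.9, 0.1, 0.0],   # stand-in for a matching caption
                  [0.0, 0.0, 1.0]])  # stand-in for an unrelated caption
probs = clip_style_scores(image, texts)
```

In the actual model the embeddings come from trained image and text encoders; this sketch only shows how the similarity scores turn into probabilities.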