Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/DDreher/AwesomeMLForDigitalMedia
A curated list of awesome machine learning resources in the context of digital media and (interactive) computer graphics.
List: AwesomeMLForDigitalMedia
awesome-list character-animation computational-imaging computer-graphics computer-vision deep-learning digital-media game-development machine-learning neural-rendering
- Host: GitHub
- URL: https://github.com/DDreher/AwesomeMLForDigitalMedia
- Owner: DDreher
- License: cc0-1.0
- Created: 2020-09-13T16:08:13.000Z (about 4 years ago)
- Default Branch: master
- Last Pushed: 2022-08-14T07:23:26.000Z (about 2 years ago)
- Last Synced: 2024-05-20T00:08:12.444Z (6 months ago)
- Topics: awesome-list, character-animation, computational-imaging, computer-graphics, computer-vision, deep-learning, digital-media, game-development, machine-learning, neural-rendering
- Homepage:
- Size: 50.8 KB
- Stars: 25
- Watchers: 5
- Forks: 3
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- ultimate-awesome - AwesomeMLForDigitalMedia - A curated list of awesome machine learning resources in the context of digital media and (interactive) computer graphics. (Other Lists / PowerShell Lists)
README
![banner](https://github.com/DDreher/awesome-ml-for-digital-media/blob/master/assets/banner.png)
A curated list of resources closing the gap between machine learning and digital media (computer graphics, computer vision, computational imaging, animation, vfx, game development,...).
No in-depth explanations, just an overview of the landscape and possible starting points for further research.

This research field is rather broad and resources tend to intersect multiple fields at the same time. I try to sort them into the category that I believe is most fitting.

_Feel free to contribute._
____
## Table of Contents
* [Audio](#audio)
* [Character Animation](#character-animation)
* [Computer Graphics](#computer-graphics)
* [Computer Vision](#computer-vision)
* [Neural Rendering](#neural-rendering)
* [Visual Computing](#visual-computing)

## Audio
* **Real-Time Guitar Amplifier Emulation with Deep Learning** (2020), Wright et al.
[[link]](https://www.mdpi.com/2076-3417/10/3/766) [[pdf]](https://www.mdpi.com/2076-3417/10/3/766/pdf) [[demo]](http://research.spa.aalto.fi/publications/papers/applsci-deep/)

## Character Animation
### Papers
* **DeepPhase: Periodic Autoencoders for Learning Motion Phase Manifolds** (2022), Starke et al.
[[link]](https://github.com/sebastianstarke/AI4Animation) [[pdf]](https://github.com/sebastianstarke/AI4Animation/blob/master/Media/SIGGRAPH_2022/Paper.pdf) [[video]](https://www.youtube.com/watch?v=YhH4PYEkVnY)
* **RigNet: Neural Rigging for Articulated Characters** (2020), Xu et al.
[[link]](https://zhan-xu.github.io/rig-net/) [[pdf]](https://people.cs.umass.edu/~zhanxu/papers/RigNet.pdf)
* **Learned Motion Matching** (2020), Holden et al.
[[link]](http://theorangeduck.com/page/learned-motion-matching) [[pdf]](http://theorangeduck.com/media/uploads/other_stuff/Learned_Motion_Matching.pdf)
* **Local Motion Phases for Learning Multi-Contact Character Movements** (2020), Starke et al.
[[link]](https://github.com/sebastianstarke/AI4Animation) [[pdf]](https://github.com/sebastianstarke/AI4Animation/raw/master/Media/SIGGRAPH_2020/Paper.pdf)
* **Neural State Machine for Character-Scene Interactions** (2019), Starke et al.
[[link]](https://github.com/sebastianstarke/AI4Animation) [[pdf]](https://github.com/sebastianstarke/AI4Animation/raw/master/Media/SIGGRAPH_Asia_2019/Paper.pdf)
* **DReCon: Data-Driven Responsive Control of Physics-Based Characters** (2019), Bergamin et al.
[[link]](https://montreal.ubisoft.com/en/drecon-data-driven-responsive-control-of-physics-based-characters/) [[pdf]](https://static-wordpress.akamaized.net/montreal.ubisoft.com/wp-content/uploads/2019/11/13214229/DReCon.pdf)
* **Mode-Adaptive Neural Networks for Quadruped Motion Control** (2018), Zhang et al.
[[link]](https://github.com/sebastianstarke/AI4Animation) [[pdf]](https://github.com/sebastianstarke/AI4Animation/raw/master/Media/SIGGRAPH_2018/Paper.pdf)
* **Phase-Functioned Neural Networks for Character Control** (2017), Holden et al.
[[link]](http://theorangeduck.com/page/phase-functioned-neural-networks-character-control) [[pdf]](http://theorangeduck.com/media/uploads/other_stuff/phasefunction.pdf)

### Datasets
* **LAFAN1 - Ubisoft La Forge Animation Dataset** (2020), Harvey et al.
[[link]](https://github.com/ubisoft/Ubisoft-LaForge-Animation-Dataset)

### Projects
* **AI4Animation: Deep Learning, Character Animation, Control**
[[link]](https://github.com/sebastianstarke/AI4Animation)

## Computer Graphics
### Papers
* **Temporally Stable Real-Time Joint Neural Denoising and Supersampling** (2022), Thomas et al.
[[link]](https://momentsingraphics.de/HPG2022.html) [[pdf]](https://momentsingraphics.de/Media/HPG2022/thomas2022-temporally_stable_real_time_joint_neural_denoising_and_supersampling.pdf) [[video]](https://www.youtube.com/watch?v=au4cPLuEpNM&t=7874s)
* **MaterialGAN: Reflectance Capture using a Generative SVBRDF Model** (2020), Guo et al.
[[link]](https://shuangz.com/projects/materialgan-sa20/) [[pdf]](https://shuangz.com/projects/materialgan-sa20/materialgan-sa20.pdf)
* **Neural Supersampling for Real-time Rendering** (2020), Xiao et al.
[[link]](https://research.fb.com/blog/2020/07/introducing-neural-supersampling-for-real-time-rendering/) [[pdf]](https://research.fb.com/wp-content/uploads/2020/06/Neural-Supersampling-for-Real-time-Rendering.pdf)
* **Using Deep Convolutional Neural Networks to Detect Rendered Glitches in Video Games** (2020), Ling et al.
[[link]](https://www.ea.com/seed/news/using-deep-convolutional-neural-networks-detect-glitches?Campaign_Source=ea+insiders&es_id=b22058bee4) [[pdf]](https://media.contentapi.ea.com/content/dam/ea/seed/presentations/seed-using-deep-convolutional-neural-networks-detect-glitches-paper.pdf)

### Talks / Courses / Tutorials / Workshops
* **CreativeAI: Deep Learning for Graphics** (2019), Mitra et al.
[[link]](https://geometry.cs.ucl.ac.uk/creativeai/)

### Projects
* **Real-time style transfer in Unity using deep neural networks** (2020), Deliot et al.
[[link]](https://blogs.unity3d.com/2020/11/25/real-time-style-transfer-in-unity-using-deep-neural-networks/)

## Computer Vision
### Papers
* **SMALR - Capturing Animal Shape and Texture from Images** (2018), Zuffi et al.
[[link]](http://smalr.is.tue.mpg.de/)

### Talks / Courses / Tutorials / Workshops
* **3DGV: Seminar on 3D Geometry and Vision** (2020)
[[link]](https://3dgv.github.io/)

### Datasets
* **Hypersim: A Photorealistic Synthetic Dataset for Holistic Indoor Scene Understanding** (2020), Roberts et al.
[[link]](https://github.com/apple/ml-hypersim/)
* **KITTI-360: A large-scale dataset with 3D&2D annotations** (2020), Xie et al.
[[link]](http://www.cvlibs.net/datasets/kitti-360/)

## Neural Rendering
### State of the Art / Surveys
* **Advances in Neural Rendering** (2022), Tewari et al.
[[link]](https://4dqv.mpi-inf.mpg.de/star_neural_rendering/) [[pdf]](http://arxiv.org/abs/2111.05849) [[video]](https://www.youtube.com/watch?v=ul9hFFtWYv8)
* **State of the Art on Neural Rendering** (2020), Tewari et al.
[[link]](http://www.niessnerlab.org/projects/tewari2020neuralrendering.html) [[pdf]](https://arxiv.org/pdf/2004.03805.pdf)

### Papers
* **NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis** (2020), Mildenhall et al.
[[link]](https://www.matthewtancik.com/nerf) [[pdf]](https://arxiv.org/pdf/2003.08934)
* **Deformable Neural Radiance Fields** (2020), Park et al.
[[link]](https://nerfies.github.io/) [[pdf]](https://arxiv.org/pdf/2011.12948.pdf)
* **NeX: Real-time View Synthesis with Neural Basis Expansion** (2021), Wizadwongsa et al.
[[link]](https://nex-mpi.github.io/) [[pdf]](https://arxiv.org/pdf/2103.05606.pdf)
* **Deep Relightable Appearance Models for Animatable Faces** (2021), Bi et al.
[[link]](https://sai-bi.github.io/project/sig21_avatar/index.html) [[pdf]](https://drive.google.com/file/d/11cj0mdPlpO6_c1rfTeGp7I7j5mu7vtYf/view) [[video]](https://www.youtube.com/watch?v=5YigyNvt4GE)
* **GANcraft - Unsupervised 3D Neural Rendering of Minecraft Worlds** (2021), Hao et al.
[[link]](https://nvlabs.github.io/GANcraft/) [[pdf]](https://arxiv.org/pdf/2104.07659.pdf) [[video]](https://www.youtube.com/watch?v=1Hky092CGFQ&feature=emb_title)
* **X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation** (2020), Bemana et al.
[[link]](http://xfields.mpi-inf.mpg.de/) [[pdf]](http://xfields.mpi-inf.mpg.de/paper/X_Fields__siggasia_2020.pdf)
* **Learning to Simulate Dynamic Environments with GameGAN** (2020), Kim et al.
[[link]](https://nv-tlabs.github.io/gameGAN/) [[pdf]](https://arxiv.org/pdf/2005.12126.pdf)
* **D-NeRF: Neural Radiance Fields for Dynamic Scenes** (2020), Pumarola et al.
[[link]](https://www.albertpumarola.com/research/D-NeRF/) [[pdf]](https://arxiv.org/pdf/2011.13961)
* **VR Facial Animation via Multiview Image Translation** (2019), Wei et al.
[[link]](https://research.fb.com/publications/vr-facial-animation-via-multiview-image-translation/) [[pdf]](https://research.fb.com/wp-content/uploads/2019/06/VR-Facial-Animation-via-Multiview-Image-Translation.pdf)
* **Face2Face: Real-time Face Capture and Reenactment of RGB Videos** (2019), Thies et al.
[[link]](http://www.niessnerlab.org/projects/thies2018face.html) [[pdf]](http://www.niessnerlab.org/papers/2019/8facetoface/thies2018face.pdf)
* **Deep Appearance Models for Face Rendering** (2018), Lombardi et al.
[[link]](https://research.fb.com/publications/deep-appearance-models-for-face-rendering/) [[pdf]](https://research.fb.com/wp-content/uploads/2018/08/Deep-Appearance-Models-for-Face-Rendering.pdf)
* **Deep Shading: Convolutional Neural Networks for Screen-Space Shading** (2017), Nalbach et al.
[[link]](http://deep-shading-datasets.mpi-inf.mpg.de/) [[pdf]](http://deep-shading-datasets.mpi-inf.mpg.de/deep-shading.pdf)

### Talks / Courses / Tutorials / Workshops
* **Neural Rendering (CVPR Tutorial)** (2020)
[[link]](https://www.neuralrender.com/) [[video 1]](https://www.youtube.com/watch?v=LCTYRqW-ne8) [[video 2]](https://www.youtube.com/watch?v=JlyGNvbGKB8&feature=youtu.be)

## Visual Computing
### Papers
* **Hierarchical Text-Conditional Image Generation with CLIP Latents** (2022), Ramesh et al.
[[link]](https://openai.com/dall-e-2/) [[pdf]](https://arxiv.org/abs/2204.06125)
* **Zero-Shot Text-to-Image Generation** (2021), Ramesh et al.
[[link]](https://openai.com/blog/dall-e/) [[pdf]](https://arxiv.org/abs/2102.12092) [[code]](https://github.com/openai/dall-e)
* **Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image** (2020), Liu et al.
[[link]](https://infinite-nature.github.io) [[pdf]](https://arxiv.org/pdf/2012.09855) [[code]](https://github.com/google-research/google-research/blob/master/infinite_nature) [[colab]](https://colab.research.google.com/github/google-research/google-research/blob/master/infinite_nature/infinite_nature_demo.ipynb) [[video]](https://youtu.be/oXUf6anNAtc)
* **Stylized Neural Painting** (2020), Zou et al.
[[link]](https://jiupinjia.github.io/neuralpainter/) [[pdf]](https://arxiv.org/abs/2011.08114) [[code]](https://github.com/jiupinjia/stylized-neural-painting) [[colab]](https://colab.research.google.com/drive/1XwZ4VI12CX2v9561-WD5EJwoSTJPFBbr?usp=sharing) [[video]](https://www.youtube.com/watch?v=oerb-nwrXhk&feature=emb_title)
* **Semantic Image Synthesis with Spatially-Adaptive Normalization** (2019), Park et al.
[[link]](https://nvlabs.github.io/SPADE/) [[pdf]](https://arxiv.org/pdf/1903.07291.pdf)
* **Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network** (2016), Ledig et al.
[[link]](https://arxiv.org/abs/1609.04802) [[pdf]](https://arxiv.org/pdf/1609.04802)

### Talks / Courses / Tutorials / Workshops
* **TUM AI Lecture Series - AI for 3D Content Creation** (2020), Sanja Fidler
[[video]](https://www.youtube.com/watch?v=pTTxPq8uZmg&feature=youtu.be)

____
# License
[![CC0](http://mirrors.creativecommons.org/presskit/buttons/88x31/svg/cc-zero.svg)](https://creativecommons.org/publicdomain/zero/1.0/)