https://github.com/dyslevium/learning-deep-learning
- Host: GitHub
- URL: https://github.com/dyslevium/learning-deep-learning
- Owner: DYSLEVIUM
- Created: 2021-08-15T19:17:11.000Z (almost 4 years ago)
- Default Branch: main
- Last Pushed: 2024-05-24T10:00:49.000Z (about 1 year ago)
- Last Synced: 2025-01-20T19:26:42.686Z (4 months ago)
- Language: Jupyter Notebook
- Size: 37.8 MB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Papers
## NEAT
1. Evolving Neural Networks through Augmenting Topologies: https://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf
## Diffusion Models
1. Denoising Diffusion Probabilistic Models: https://arxiv.org/pdf/2006.11239
1. Diffusion Models Beat GANs on Image Synthesis: https://arxiv.org/pdf/2105.05233
## GANs
1. GAN: https://arxiv.org/pdf/1406.2661.pdf
1. DCGAN: https://arxiv.org/pdf/1511.06434.pdf
1. WGAN: https://arxiv.org/pdf/1701.07875.pdf
1. GP: https://arxiv.org/pdf/1910.06922.pdf
1. Pix2Pix: https://arxiv.org/pdf/1611.07004.pdf
1. CycleGAN: https://arxiv.org/pdf/1703.10593.pdf
1. ProGAN: https://arxiv.org/pdf/1710.10196v3.pdf
1. StyleGAN: https://arxiv.org/pdf/1812.04948.pdf
1. StyleGAN2: https://arxiv.org/pdf/1912.04958.pdf
1. StyleGAN3: https://arxiv.org/pdf/2201.13433.pdf
1. StyleGAN-T: https://arxiv.org/pdf/2301.09515.pdf
1. GauGAN: https://arxiv.org/pdf/1903.07291.pdf
1. GigaGAN: https://arxiv.org/pdf/2303.05511.pdf
## Super Resolution
1. PULSE: https://arxiv.org/pdf/2003.03808.pdf
1. SRGAN: https://arxiv.org/pdf/1609.04802.pdf
1. ESRGAN: https://arxiv.org/pdf/1809.00219.pdf
## Image Segmentation
1. UNet: https://arxiv.org/pdf/1505.04597
1. Attention U-Net: https://arxiv.org/pdf/1804.03999.pdf
1. R2U-Net: https://arxiv.org/ftp/arxiv/papers/1802/1802.06955.pdf
## Image Recognition
1. ResNet: https://arxiv.org/pdf/1512.03385.pdf
1. VGG: https://arxiv.org/pdf/1409.1556.pdf
1. GoogLeNet (InceptionV1): https://arxiv.org/pdf/1409.4842.pdf
1. LeNet: https://arxiv.org/pdf/1609.04112.pdf
1. AlexNet: https://proceedings.neurips.cc/paper_files/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf
1. EfficientNetV1: https://arxiv.org/pdf/1905.11946.pdf
1. MobileNet: https://arxiv.org/pdf/1704.04861.pdf
1. SENet: https://arxiv.org/pdf/1709.01507.pdf
1. ResNeXt: https://arxiv.org/pdf/1611.05431.pdf
1. YOLOv1 (You Only Look Once): https://arxiv.org/pdf/1506.02640.pdf
1. YOLOv2: https://arxiv.org/pdf/1612.08242.pdf
1. YOLOv3: https://arxiv.org/pdf/1804.02767.pdf
1. YOLOv5
1. YOLOv6
1. YOLOv7
1. YOLOv8
## Image Restoration
1. Bringing Old Photos Back to Life: https://arxiv.org/pdf/2004.09484.pdf
## Image Colorization
1. DeOldify: https://github.com/jantic/DeOldify
1. DFDNet: https://arxiv.org/pdf/2008.00418.pdf
## Blind Face Restoration
1. CodeFormer: https://arxiv.org/pdf/2206.11253.pdf
1. DFDNet: https://arxiv.org/pdf/2008.00418.pdf
1. GAN Prior Embedded Network for Blind Face Restoration in the Wild: https://arxiv.org/abs/2105.06070
1. Blind Face Restoration via Deep Multi-scale Component Dictionaries: https://arxiv.org/pdf/2008.00418.pdf
## AI Photogrammetry
1. Instant-NGP: https://nvlabs.github.io/instant-ngp/assets/mueller2022instant.pdf
1. NERF: https://arxiv.org/pdf/2003.08934.pdf
1. LERF: https://arxiv.org/abs/2303.09553
1. Instruct 3D-to-3D: https://arxiv.org/abs/2303.15780
1. HyperReel: https://arxiv.org/abs/2301.02238
1. ProlificDreamer: https://arxiv.org/abs/2305.16213
1. HiFA: https://arxiv.org/pdf/2305.18766.pdf
## Diffusion
1. ControlNet: https://arxiv.org/pdf/2302.05543.pdf
## Voice Cloning
1. TalkNet 2: https://arxiv.org/abs/2104.08189
2. Textless NLP: https://ai.meta.com/blog/textless-nlp-generating-expressive-speech-from-raw-audio/
## Models
1. BERT:
1. Diffusion:
1. Transformers:
1. Attention Is All You Need: https://arxiv.org/pdf/1706.03762.pdf
1. Whisper
1. Vision Transformers
1. DINO Regularization: https://arxiv.org/pdf/2104.14294.pdf
# Videos
## Transformers
1. https://www.youtube.com/playlist?list=PLDw5cZwIToCvXLVY2bSqt7F2gu8y-Rqje
1. https://www.youtube.com/watch?v=XowwKOAWYoQ
1. https://www.youtube.com/watch?v=XSSTuhyAmnI
1. https://www.youtube.com/watch?v=iDulhoQ2pro
1. https://www.youtube.com/watch?v=kWLed8o5M2Y
1. https://www.youtube.com/watch?v=bCz4OMemCcA
## CNN
1. https://www.youtube.com/watch?v=8iIdWHjleIs
1. https://www.youtube.com/watch?v=Lakz2MoHy6o&t=275s
## Neural Networks
1. https://www.youtube.com/watch?v=pauPCy_s0Ok&t=9s
1. https://www.youtube.com/watch?v=FBpPjjhJGhk
## Markov Chains
1. https://www.youtube.com/playlist?list=PLM8wYQRetTxBkdvBtz-gw8b9lcVkdXQKV
## Notes to self
1. Use mixed-precision training
1. Save the model and optimizer state on each epoch
1. Decay the learning rate
1. Use label smoothing
1. Use L1/L2 regularization, dropout and early stopping
1. Use the no_grad() decorator for evaluation code
1. Probably use PyTorch Lightning (a plain-PyTorch sketch of the notes above follows this list)
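
A minimal plain-PyTorch sketch tying most of these notes together: mixed precision, per-epoch checkpoints, learning-rate decay, label smoothing, L2 via weight decay, dropout, and `no_grad()` as a decorator. The tiny model, random data and hyperparameters are placeholder assumptions, not taken from this repo's notebooks; L1 regularization and early stopping are omitted for brevity.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder model (with dropout) and random data, just to keep the sketch self-contained.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(256, 10),
).to(device)
loader = DataLoader(
    TensorDataset(torch.randn(512, 1, 28, 28), torch.randint(0, 10, (512,))),
    batch_size=64, shuffle=True,
)

epochs = 5
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)                             # label smoothing
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, weight_decay=1e-2)    # L2 via weight decay
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)  # decay the learning rate
scaler = torch.cuda.amp.GradScaler(enabled=device == "cuda")                     # mixed-precision loss scaling


@torch.no_grad()  # no_grad() used as a decorator for evaluation
def evaluate(model, loader):
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total


for epoch in range(epochs):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad(set_to_none=True)
        with torch.cuda.amp.autocast(enabled=device == "cuda"):  # mixed-precision forward pass
            loss = criterion(model(x), y)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
    scheduler.step()

    # Save model, optimizer and scheduler state every epoch so training can resume.
    torch.save(
        {
            "epoch": epoch,
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
            "scheduler": scheduler.state_dict(),
        },
        f"checkpoint_epoch_{epoch}.pt",
    )
    print(f"epoch {epoch}: accuracy {evaluate(model, loader):.3f}")
```

PyTorch Lightning folds most of this boilerplate (AMP, checkpointing, schedulers) into its Trainer and callbacks, which is presumably what the last note is getting at.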