Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/agamiko/100-days-of-code
My 100 days journey with coding to improve my Machine Learning, Deep Learning, Data Science skills
- Host: GitHub
- URL: https://github.com/agamiko/100-days-of-code
- Owner: AgaMiko
- License: mit
- Created: 2020-04-23T11:05:45.000Z (almost 5 years ago)
- Default Branch: master
- Last Pushed: 2020-05-04T11:35:37.000Z (almost 5 years ago)
- Last Synced: 2024-12-04T10:47:00.295Z (2 months ago)
- Topics: acoustics, computer-vision, data-science, deep-learning, image-processing, machine-learning, natural-language-processing, neural-networks
- Size: 794 KB
- Stars: 4
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# 100-days-of-code
My 100 days journey with coding to improve my Machine Learning, Deep Learning, and Data Science skills. Posting current updates on Twitter: https://twitter.com/AgnMikolajczyk

## Day 1
I continued with the wav2wav #autoencoder for audio style transfer. The loss decreased gradually during training, so I've started writing an audio generation script. Also, I've added JSON configuration files with parameters of e.g. the mel spectrogram.

![](images/wav2wav.PNG)
* Wav2wav autoencoder: https://arxiv.org/pdf/1904.08983.pdf
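The JSON-configuration idea above could be sketched like this; the file layout and parameter names (`sample_rate`, `n_fft`, `hop_length`, `n_mels`) are my illustrative guesses, not the repo's actual config:

```python
import json

# Hypothetical configuration; the README does not show the real file,
# so these keys and values are illustrative only.
config_text = """
{
  "melspectrogram": {
    "sample_rate": 22050,
    "n_fft": 1024,
    "hop_length": 256,
    "n_mels": 80
  }
}
"""

params = json.loads(config_text)["melspectrogram"]
# These keyword names also match torchaudio.transforms.MelSpectrogram,
# so such a dict could be unpacked directly: MelSpectrogram(**params)
print(params["n_mels"])  # 80
```

Keeping the spectrogram parameters in a config file like this makes it easy to rerun experiments with different settings without touching the training code.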
## Day 2
Had a little fun with causal convolutions in the wav-to-wav autoencoder in #pytorch. Reimplemented them and tried to understand them better. Later, experimented a bit with different architectures.

* Causal convolutions: https://medium.com/the-artificial-impostor/notes-understanding-tensorflow-part-3-7f6633fcc7c7
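A minimal PyTorch sketch of a causal 1-D convolution: the input is padded on the left only, so each output sample depends solely on current and past inputs. The layer sizes here are illustrative, not the autoencoder's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """Conv1d that never looks into the future: pad the input on the
    left by (kernel_size - 1) * dilation before convolving."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                    # x: (batch, channels, time)
        x = F.pad(x, (self.left_pad, 0))     # asymmetric: left side only
        return self.conv(x)

x = torch.randn(1, 1, 16)
y = CausalConv1d(1, 1, kernel_size=3)(x)
print(y.shape)  # torch.Size([1, 1, 16])
```

The asymmetric padding is the whole trick: with symmetric padding a Conv1d would mix in future samples, which breaks autoregressive audio generation.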
## Day 3
I came up with an idea for a fun project after working hours. I always wanted to train GANs, but popular datasets seemed boring, so I started scraping data with Selenium! It was my first time trying it out, and the first successfully clicked button felt awesome.

* Selenium: https://www.selenium.dev/
* Selenium for data scraping: https://medium.com/the-andela-way/introduction-to-web-scraping-using-selenium-7ec377a8cf72
* BeautifulSoup: https://www.crummy.com/software/BeautifulSoup/bs4/doc/

## Day 4
Started generating random RPG-like pixel characters with the LPC spritesheet generator: http://tinyurl.com/yb6yg7kw. I'm creating my own dataset for my experiment with Generative Adversarial Networks. Soon I'll start implementing it in #pytorch.

* LPC spritesheet: http://tinyurl.com/yb6yg7kw
* Pytorch: https://pytorch.org/

My notebooks:
* [Download random spritesheet with Selenium](https://github.com/AgaMiko/pixel_character_generator/blob/master/notebooks/1_download_random_spritesheet.ipynb)
* [Extract character from spritesheet](https://github.com/AgaMiko/pixel_character_generator/blob/master/notebooks/2_extract_character.ipynb)

![](images/58.png) ![](images/58_2.png) ![](images/58_3.png) ![](images/58_4.png)
![](images/212.png) ![](images/212_2.png) ![](images/212_3.png) ![](images/212_4.png)
![](images/584.png) ![](images/584_2.png) ![](images/584_3.png) ![](images/584_4.png)

## Day 5
Implemented DCGAN from the PyTorch tutorial.
* Experimented with latent size (input for Generator) and feature map sizes
* Added soft and noisy labels
* Added Wasserstein loss, which is a good remedy for mode collapse
  * Wasserstein loss: https://developers.google.com/machine-learning/gan/loss
  * Mode collapse: https://machinelearningmastery.com/practical-guide-to-gan-failure-modes/
* Added dropout in both the generator and the discriminator

* [DCGAN Tutorial](https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html)
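The soft and noisy label trick mentioned in the list above could look roughly like this; the smoothing amount and flip probability are my illustrative choices, not the values used in the notebooks:

```python
import torch

def soft_noisy_labels(batch_size, real=True, smooth=0.1, flip_p=0.05):
    """Label smoothing plus occasional label flips, a common GAN
    stabilisation trick that keeps the discriminator from becoming
    overconfident. Exact rates here are illustrative."""
    if real:
        labels = 1.0 - smooth * torch.rand(batch_size)   # in [0.9, 1.0]
    else:
        labels = smooth * torch.rand(batch_size)         # in [0.0, 0.1]
    flip = torch.rand(batch_size) < flip_p               # flip a few labels
    return torch.where(flip, 1.0 - labels, labels)

labels = soft_noisy_labels(64, real=True)
print(labels.shape)  # torch.Size([64])
```

These labels would then replace the hard 0/1 targets in the discriminator's BCE loss.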
### Dataset
![](images/random_chars.png)

* [Download dataset here](https://github.com/AgaMiko/pixel_character_generator)
### Results
* [My final code with modified DCGAN](https://github.com/AgaMiko/pixel_character_generator/blob/master/notebooks/3_DCGAN.ipynb)

![](images/real_fake.png)
## Day 6
Today I practiced a bit with custom data loaders in PyTorch: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
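A minimal custom dataset along the lines of that tutorial might look like this; random tensors stand in for the real images, and the tensor sizes and the angle label are illustrative:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class SpriteDataset(Dataset):
    """Minimal custom dataset: random tensors play the role of the
    pixel-character images that the real project loads from disk."""
    def __init__(self, n_items=100):
        self.data = torch.randn(n_items, 3, 64, 64)
        self.labels = torch.randint(0, 4, (n_items,))  # e.g. a view-angle id

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx], self.labels[idx]

loader = DataLoader(SpriteDataset(), batch_size=16, shuffle=True)
images, labels = next(iter(loader))
print(images.shape)  # torch.Size([16, 3, 64, 64])
```

Implementing `__len__` and `__getitem__` is all `DataLoader` needs to handle batching and shuffling for you.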
Started writing the generator code for my wav-to-wav autoencoder.

## Day 7
First week finished! Today, I experimented with XLA for TPU with multiprocessing, using a notebook shared on Kaggle for the Flower Classification challenge: https://kaggle.com/dhananjay3/fast-pytorch-xla-for-tpu-with-multiprocessing

* PyTorch notebooks for the challenge: https://www.kaggle.com/c/flower-classification-with-tpus/notebooks?sortBy=relevance&group=everyone&search=pytorch&page=1&pageSize=20&competitionId=18278

## Day 8
Continued having fun with #GenerativeAdversarialNetwork - this time Conditional GANs on retro pixel game dataset - http://github.com/AgaMiko/pixel_character_generator
I've managed to modify the code and run the training; can't wait to see the results.

## Day 9
Today I finished the Conditional DCGAN experiments! The GAN generates a pixel character seen from a selected angle. Changes versus the plain DCGAN:

* different learning rates for the discriminator and the generator
* soft labels
* added a classification loss to the discriminator: it has to guess fake/real but also the character's angle
* the generator is conditioned with an embedding from a trainable look-up table that encodes the character's view angle

![](https://github.com/AgaMiko/pixel_character_generator/blob/master/images/CGAN.PNG)
* [notebook with modified Conditional DCGAN](https://github.com/AgaMiko/pixel_character_generator/blob/master/notebooks/4_Conditional_DCGAN.ipynb)
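The look-up-table conditioning described above might be sketched as follows. This uses a small MLP as a stand-in for the actual DCGAN generator in the notebook, and all sizes (latent dimension, number of angles, embedding width, image size) are illustrative:

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Conditioning via a trainable look-up table (nn.Embedding): the
    angle id is embedded and concatenated with the latent vector, so
    the generator learns a separate 'meaning' for each view angle."""
    def __init__(self, latent_dim=100, n_angles=4, embed_dim=16):
        super().__init__()
        self.embed = nn.Embedding(n_angles, embed_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + embed_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 3 * 64 * 64),
            nn.Tanh(),                      # pixel values in [-1, 1]
        )

    def forward(self, z, angle):
        cond = self.embed(angle)            # (batch, embed_dim)
        x = self.net(torch.cat([z, cond], dim=1))
        return x.view(-1, 3, 64, 64)

g = ConditionalGenerator()
img = g(torch.randn(8, 100), torch.randint(0, 4, (8,)))
print(img.shape)  # torch.Size([8, 3, 64, 64])
```

Because the embedding table is trained jointly with the generator, the model is free to place the four angle codes wherever in embedding space best separates the views.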