Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/mountchicken/efficient-deep-learning
A bag of tricks to speed up your deep learning process
- Host: GitHub
- URL: https://github.com/mountchicken/efficient-deep-learning
- Owner: Mountchicken
- Created: 2022-03-31T04:22:39.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2024-04-28T01:41:06.000Z (7 months ago)
- Last Synced: 2024-10-23T04:56:03.139Z (23 days ago)
- Topics: deep-learning, tricks
- Language: Python
- Size: 16.3 MB
- Stars: 150
- Watchers: 5
- Forks: 5
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Efficient Deep Learning
Tricks here, you can have
## 1.Introduction
- With the rapid development of deep learning, more and more people are flocking to this field, including me. As a former rookie, I ran into all kinds of problems along the way, so I created this repo to record tricks that can make your deep learning workflow more efficient. You are also welcome to open a PR and share your own tips!
## 2.Efficient Coding
- Strategies to code efficiently.
- [Efficient Coding](Efficient_Coding.md)
- [Use Vscode](Efficient_Coding.md#1-you-shouldnt-miss-vscode)
- [Auto code formatting](Efficient_Coding.md#2-automatically-format-your-code)
- [Pre-commit hook](Efficient_Coding.md#3-use-a-pre-commit-hook-to-check-your-code)
- [Learn to use git](Efficient_Coding.md#4-learn-to-use-git)
- [Grammarly](Efficient_Coding.md#5-use-grammarly-to-check-your-writing)
- [StackOverflow](Efficient_Coding.md#6-search-on-stackoverflow-first)
- [Auto docstring](Efficient_Coding.md#7-automatically-format-your-docstring)
- [Accelerate](Efficient_Coding.md#8-accelerate) (see the sketch below)
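
If the "Accelerate" entry refers to the Hugging Face `accelerate` library (an assumption on my part; the linked section has the actual details), a minimal training step with it looks roughly like this:

```python
# Minimal sketch of a training step with Hugging Face `accelerate`.
# The model/optimizer/dataloader are whatever you already have; nothing
# here is specific to this repo's recipe.
import torch
from accelerate import Accelerator


def train(model, optimizer, dataloader, epochs=1):
    accelerator = Accelerator()  # device placement / precision come from the launch config
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

    model.train()
    for _ in range(epochs):
        for inputs, targets in dataloader:
            optimizer.zero_grad()
            loss = torch.nn.functional.cross_entropy(model(inputs), targets)
            accelerator.backward(loss)  # replaces loss.backward()
            optimizer.step()
```

Launched with `accelerate launch train.py`, the same script can run on a single GPU, multiple GPUs, or with mixed precision without code changes.
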
## 3.Efficient Data Processing
- Strategies to speed up your data processing.
- [Efficient Data Processing](Efficient_DataProcessing.md)
- [SSD](Efficient_DataProcessing.md#11-use-ssd-instead)
- [num_workers and pin_memory](Efficient_DataProcessing.md#12-multiple-workers-and-pinmemory-in-dataloader) (see the DataLoader sketch after this list)
- [LMDB file](Efficient_DataProcessing.md#21-efficient-data-storage-methods)
- [Albumentations](Efficient_DataProcessing.md#22-efficient-data-augmentation-library)
- [Data augmentation on GPU](Efficient_DataProcessing.md#23-data-augmentation-on-gpu)
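
As a rough illustration of the `num_workers`/`pin_memory` item (the dataset, batch size, and worker count below are placeholders, not the repo's recommendations):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset standing in for your real one.
dataset = TensorDataset(torch.randn(1024, 3, 224, 224), torch.randint(0, 10, (1024,)))

loader = DataLoader(
    dataset,
    batch_size=64,
    shuffle=True,
    num_workers=4,    # load and augment batches in parallel worker processes
    pin_memory=True,  # page-locked host memory allows faster, asynchronous copies to the GPU
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for images, labels in loader:
    # non_blocking=True lets the host-to-device copy overlap with computation
    # when the source tensors are pinned.
    images = images.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
```

A common heuristic is to start with a handful of workers per GPU and tune from there by watching GPU utilization.
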
## 4.Efficient Training
- Strategies to speed up your training process (a combined training-loop sketch follows the list below).
- [Efficient Training](Efficient_Training.md)
- [cudnn.benchmark=True](Efficient_Training.md#11-set-cudnnbenchmarktrue)
- [Set gradients to None during back propagation](Efficient_Training.md#12-set-gradients-to-none-during-back-propagation)
- [Turn off debugging APIs](Efficient_Training.md#13-turn-off-debugging)
- [Turn off gradient computation during validation](Efficient_Training.md#14-turn-off-gradient-computation-during-validation)
- [Use another optimizer AdamW](Efficient_Training.md#21-use-another-optimizer-adamw)
- [Learning rate schedule](Efficient_Training.md#22-learning-rate-schedule)
- [Useful combination, Adam with 3e-4](Efficient_Training.md#23-best-combination-adam-with-3e-4)
- [LR Warm up and Cosine Learning Rate Decay](Efficient_Training.md#24-lr-warm-up-and-cosine-learning-rate-decay)
- [L2 decay](Efficient_Training.md#25-l2-decay)
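
Several of the training items above slot into one loop. The sketch below only shows where each switch goes; the toy model, optimizer settings, and schedule lengths are illustrative, not the repo's prescribed values:

```python
import torch
import torch.nn as nn

torch.backends.cudnn.benchmark = True     # let cuDNN autotune conv kernels (helps when input sizes are fixed)
torch.autograd.set_detect_anomaly(False)  # keep debugging APIs off outside of debugging sessions

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # placeholder network
optimizer = torch.optim.AdamW(model.parameters(),
                              lr=3e-4,           # the "3e-4" starting point
                              weight_decay=1e-2)  # decoupled L2-style weight decay

# Short linear warm-up followed by cosine decay; the 5/95 split is only an example.
warmup = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.1, total_iters=5)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=95)
scheduler = torch.optim.lr_scheduler.SequentialLR(optimizer, schedulers=[warmup, cosine], milestones=[5])


def run_epoch(train_loader, val_loader):
    model.train()
    for inputs, targets in train_loader:
        optimizer.zero_grad(set_to_none=True)  # frees gradient tensors instead of filling them with zeros
        loss = nn.functional.cross_entropy(model(inputs), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()

    model.eval()
    with torch.no_grad():  # no gradient bookkeeping during validation
        for inputs, targets in val_loader:
            _ = model(inputs)
```
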
## 5.Efficient GPUtilization
- Strategies to have a better GPU utilization (an AMP and gradient-accumulation sketch follows the list below).
- [Efficient GPUtilization](Efficient_GPUtilization.md)
- [CUDA out of memory solutions](Efficient_GPUtilization.md#1-cuda-out-of-memory-solutions)
- [Automatic Mixed Precision (AMP)](Efficient_GPUtilization.md#21-automatic-mixed-precisionamp)
- [Gradient Accumulation](Efficient_GPUtilization.md#22-gradient-accumulation)
- [Gradient Checkpoint](Efficient_GPUtilization.md#23-gradient-checkpoint)
- [Data parallelization training](Efficient_GPUtilization.md#31-distributed-model-training)
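
A rough sketch of how AMP and gradient accumulation combine in one step function (the model, batch shapes, and `accum_steps` value are made up for illustration); gradient checkpointing would additionally wrap expensive submodules with `torch.utils.checkpoint.checkpoint`:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(512, 10).to(device)  # placeholder network
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
scaler = torch.cuda.amp.GradScaler(enabled=device.type == "cuda")
accum_steps = 4  # effective batch size = batch_size * accum_steps


def train_one_epoch(loader):
    model.train()
    optimizer.zero_grad(set_to_none=True)
    for step, (inputs, targets) in enumerate(loader):
        inputs, targets = inputs.to(device), targets.to(device)
        with torch.cuda.amp.autocast(enabled=device.type == "cuda"):
            # the forward pass runs in reduced precision where it is safe to do so
            loss = nn.functional.cross_entropy(model(inputs), targets)
        scaler.scale(loss / accum_steps).backward()  # accumulate scaled gradients
        if (step + 1) % accum_steps == 0:
            scaler.step(optimizer)  # unscales gradients, then optimizer.step()
            scaler.update()
            optimizer.zero_grad(set_to_none=True)
```
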
## 6.Efficient Tools
- A list of useful tools.
- [Efficient Tools](Efficient_Tools.md)
- [Torchinfo: Visualize Network Architecture](Efficient_Tools.md#1-torchinfo-visualize-network-architecture) (see the sketch after this list)
- [drawio: Free graphing software](Efficient_Tools.md#2-drawio-free-graphing-software)
- [Octotree: Free GitHub code tree](Efficient_Tools.md#3-octotree-free-github-code-tree)
- [AI pair programmer: Github Copilot](Efficient_Tools.md#4-ai-pair-programmer-github-copilot)
- [PARSEC: Free Remote Desktop](Efficient_Tools.md#5-parsec-free-remote-desktop)
- [gdown: Download Large files from Google Drive in command line](Efficient_Tools.md#6-gdown-download-large-files-from-google-drive-in-command-line)
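
A small usage example for the torchinfo entry (the toy model is just for illustration):

```python
import torch.nn as nn
from torchinfo import summary  # pip install torchinfo

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)
# Prints per-layer output shapes, parameter counts, and an estimated memory footprint.
summary(model, input_size=(1, 3, 224, 224))  # (batch, channels, height, width)
```
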
----
## 7.Efficient Writing
- [Efficient Writing](Efficient_Writting.md)
- [Overleaf: Online LaTeX Editor](Efficient_Writting.md#1-overleaf-online-latex-editor)
- [Python Graph Gallery: A gallery of Python graphs](Efficient_Writting.md#2-python-graph-gallery-a-gallery-of-python-graphs) (a small plotting example follows this list)
- [I Love PDF: Online PDF Tools](Efficient_Writting.md#3-i-love-pdf-online-pdf-tools)
- [Color Hunt: Color Palettes for Designers and Artists](Efficient_Writting.md#4-color-hunt-color-palettes-for-designers-and-artists)
- [Tables Generator: Create LaTeX Tables](Efficient_Writting.md#5-tables-generator-create-latex-tables)
- [ACRONYMIFY: Name your paper with a cool acronym](Efficient_Tools.md#6-acronymify-name-your-paper-with-a-cool-acronyms)
- [Linggle: Grammar checker](Efficient_Tools.md#7-linggle-grammer-checker)
- [arXiv LaTeX cleaner: Clean up the LaTeX code of your paper](Efficient_Tools.md#8-arxiv-latex-cleaner-clean-up-the-latex-code-of-your-paper)
- [Useful prompts for ChatGPT to beautify your paper](Efficient_Tools.md#9-usefull-prompt-for-chatgpt-to-beautify-your-paper)
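
In the spirit of the Python Graph Gallery entry, a tiny matplotlib example that saves a vector figure ready for LaTeX (the data and styling are made up):

```python
import matplotlib.pyplot as plt
import numpy as np

epochs = np.arange(1, 51)
loss = np.exp(-epochs / 15) + 0.05 * np.random.rand(50)  # fake training curve

plt.plot(epochs, loss, linewidth=2)
plt.xlabel("Epoch")
plt.ylabel("Training loss")
plt.title("Example training curve")
plt.tight_layout()
plt.savefig("loss_curve.pdf")  # vector output drops straight into a LaTeX paper
```
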
----
- ***"The past decade has seen tremendous progress in the field of artificial intelligence thanks to the resurgence of neural networks through deep learning. This has helped improve the ability for computers to see, hear, and understand the world around them, leading to dramatic advances in the application of AI to many fields of science and other areas of human endeavor" ——Jeffrey Dean***