# Great-Deep-Learning-Tutorials
A Great Collection of Deep Learning Tutorials and Repositories

## General Deep Learning Tutorials:
- [Browse state-of-the-art Deep Learning based Papers with their associated codes](https://paperswithcode.com/sota) [_Extremely Fantastic_]
- [Deep-Learning-Roadmap](https://github.com/astorfi/Deep-Learning-Roadmap)
- [DeepLizard](https://deeplizard.com/) [_Good Tutorials for Deep Learning_]
- [Sebastian Ruder - Blog](https://ruder.io/) [_Great NLP & Deep Learning Posts_]
- [Jeremy Jordan - Blog](https://www.jeremyjordan.me/author/jeremy/)
- [Lil'Log - Lilian Weng's Blog](https://lilianweng.github.io/lil-log/) [_Excellent_]
- [Torchvision Release Notes](https://github.com/pytorch/vision/releases) [_Important_]
- [The 10 most useful Machine Learning projects of the past year (2018)](https://towardsdatascience.com/the-10-most-useful-machine-learning-projects-of-the-past-year-2018-5378bbd4919f)
- [ResNet Review](https://towardsdatascience.com/review-resnet-winner-of-ilsvrc-2015-image-classification-localization-detection-e39402bfa5d8)
- [Receptive Field Estimation](https://github.com/fornaxai/receptivefield) [_Great_]
- [An overview of gradient descent optimization algorithms](https://ruder.io/optimizing-gradient-descent/) [_Useful_]
- [How to decide on learning rate](https://towardsdatascience.com/how-to-decide-on-learning-rate-6b6996510c98)
- [Overview of State-of-the-art Machine Learning Algorithms per Discipline per Task](https://towardsdatascience.com/overview-state-of-the-art-machine-learning-algorithms-per-discipline-per-task-c1a16a66b8bb)
- [Practical Machine Learning](https://github.com/youssefHosni/Practical-Machine-Learning)
- [Awesome Machine Learning and AI Courses](https://github.com/luspr/awesome-ml-courses)
- [UVA Deep Learning II Course](https://uvadl2c.github.io/)
- [PyTorch Book](https://github.com/chenyuntc/pytorch-book)
- [Fast.ai Course: Practical Deep Learning for Coders](https://course.fast.ai/Lessons/lesson1.html) [**Great**]
- [Neuromatch Deep Learning Course](https://deeplearning.neuromatch.io/tutorials/intro.html) [**Great**]
- [labmlai: 59 Implementations/tutorials of deep learning papers with side-by-side notes](https://github.com/labmlai/annotated_deep_learning_paper_implementations) [**Great**]
- [labml.ai](https://nn.labml.ai/index.html)
- [FightingCV-Paper-Reading: understand the most advanced research work in an easier way](https://github.com/xmu-xiaoma666/FightingCV-Paper-Reading)
- [Learn PyTorch for Deep Learning: Zero to Mastery Course](https://github.com/mrdbourke/pytorch-deep-learning) [**Excellent**]
- [ML Papers Explained](https://github.com/dair-ai/ML-Papers-Explained) [**Excellent**]
- [Alpha Signal: Latest Research in Machine Learning](https://alphasignal.ai/)
- [Harvard CS197: AI Research Experiences - The Course Book](https://docs.google.com/document/u/0/d/1uvAbEhbgS_M-uDMTzmOWRlYxqCkogKRXdbKYYT98ooc/mobilebasic#heading=h.bko37p9m9o8g) [**Excellent**]
- [Understanding Deep Learning (udlbook) - Book with Jupyter Notebooks](https://udlbook.github.io/udlbook/)
- [A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT](https://arxiv.org/abs/2302.09419)
- [interconnects.ai: Great AI Blog Posts & Podcasts](https://www.interconnects.ai/)
- [The Fundamentals of Modern Deep Learning with PyTorch (Short Course)](https://www.linkedin.com/posts/sebastianraschka_github-rasbtpycon2024-tutorial-materials-activity-7196468139289677827-qUlf?utm_source=share&utm_medium=member_android)
- [EfficientML Course](https://www.youtube.com/playlist?list=PL80kAHvQbh-pT4lCkDT53zT8DKmhE0idB) [Great]
- [Andrej Karpathy's Neural Networks: Zero to Hero Course](https://www.youtube.com/playlist?list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ)

## Deep Learning Useful Resources for Computer Vision:
- [Great Deep Learning Resources for Computer Vision Tasks](https://github.com/ahkarami/Great-Deep-Learning-Tutorials/blob/master/ComputerVision.md) [_Excellent_]

## Deep Learning Useful Resources for Natural Language Processing (NLP):
- [Great Deep Learning Resources for NLP Tasks](https://github.com/ahkarami/Great-Deep-Learning-Tutorials/blob/master/NLP.md) [_Excellent_]

## Deep Learning Useful Resources for Spoken Language Processing (Speech Processing):
- [Great Deep Learning Resources for Speech Processing Tasks](https://github.com/ahkarami/Great-Deep-Learning-Tutorials/blob/master/Speech.md) [_Excellent_]

## Deep Learning & Machine Learning Useful Resources for General Data Science Tasks:
- [Great Deep Learning Resources for Data Science Tasks](https://github.com/ahkarami/Great-Deep-Learning-Tutorials/blob/master/DataScience.md) [_Excellent_]

## General Notes about Generative AI:
- [Generative AI in action: real-world applications and examples](https://lablab.ai/blog/generative-ai-in-action-real-world-applications-and-examples)

## Quantization & Distillation of Deep Learning Models:
- [Quantization](https://nervanasystems.github.io/distiller/quantization/)
- [Neural Network Distiller](https://github.com/NervanaSystems/distiller/)
- [Introduction to Quantization on PyTorch](https://pytorch.org/blog/introduction-to-quantization-on-pytorch/) [_Excellent_]
- [Dynamic Quantization in PyTorch](https://pytorch.org/tutorials/advanced/dynamic_quantization_tutorial.html)
- [Static Quantization in PyTorch](https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html)
- [Intel(R) Math Kernel Library - Intel MKL-DNN](https://github.com/intel/mkl-dnn)
- [Intel MKL-DNN - Website](https://01.org/mkl-dnn)
- [ONNX Float32 to Float16](https://github.com/onnx/onnx-docker/blob/master/onnx-ecosystem/converter_scripts/float32_float16_onnx.ipynb)
- [Neural Network Quantization Introduction](https://jackwish.net/neural-network-quantization-introduction.html) [_Tutorial_]
- [Quantization in Deep Learning](https://medium.com/@joel_34050/quantization-in-deep-learning-478417eab72b) [_Tutorial_]
- [Speeding up Deep Learning with Quantization](https://towardsdatascience.com/speeding-up-deep-learning-with-quantization-3fe3538cbb9) [_Tutorial_]
- [Knowledge Distillation in Deep Learning](https://medium.com/analytics-vidhya/knowledge-distillation-dark-knowledge-of-neural-network-9c1dfb418e6a)
- [Model Distillation Techniques for Deep Learning](https://heartbeat.fritz.ai/research-guide-model-distillation-techniques-for-deep-learning-4a100801c0eb)
- [MMRazor: model compression toolkit](https://github.com/open-mmlab/mmrazor) [Great]
- [FP8 Quantization: The Power of the Exponent](https://github.com/Qualcomm-AI-research/FP8-quantization)
- [Quanto: a pytorch quantization toolkit](https://huggingface.co/blog/quanto-introduction) [**Great**]
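
For orientation, the dynamic quantization tutorial linked above boils down to a few lines of PyTorch. A minimal post-training sketch (the toy model and layer sizes here are assumptions for illustration):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
model.eval()

# Quantize the Linear layers' weights to int8; activations are quantized
# dynamically at inference time.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
with torch.no_grad():
    print(quantized_model(x).shape)  # torch.Size([1, 10])
```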

## AutoML:
- [Auto Gluon AI](https://auto.gluon.ai/stable/index.html#)
- [AWS Auto Gluon](https://github.com/awslabs/autogluon)

## Diffusion Models:
- [Diffusion Models via lilianweng](https://lilianweng.github.io/posts/2021-07-11-diffusion-models/)
- [Diffusion Models Papers Survey Taxonomy](https://github.com/YangLing0818/Diffusion-Models-Papers-Survey-Taxonomy)
- [Phenaki: a text-to-video model](https://github.com/LAION-AI/phenaki)
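
As shared background for the resources above, DDPM-style diffusion models define a fixed forward noising process (notation follows the common convention used in Lilian Weng's post):

```latex
q(x_t \mid x_{t-1}) = \mathcal{N}\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t \mathbf{I}\right),
\qquad
q(x_t \mid x_0) = \mathcal{N}\left(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ (1-\bar{\alpha}_t)\mathbf{I}\right),
\quad \bar{\alpha}_t = \prod_{s=1}^{t} (1-\beta_s)
```

The model is then trained to reverse this process, typically by predicting the noise added at each step.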

## Multimodal Deep Learning:
- [Multimodal Deep Learning Book](https://arxiv.org/abs/2301.04856)
- [Understanding MultiModal LLMs](https://www.linkedin.com/posts/sebastianraschka_there-has-been-a-lot-of-new-research-on-the-activity-7258836067129139200-QJkr?utm_source=share&utm_medium=member_desktop)

## Deep Reasoning:
- [What’s Next For AI? Enter: Deep Reasoning](https://towardsdatascience.com/whats-next-for-ai-enter-deep-reasoning-fae8b131962a)
- [Deep Learning approaches to understand Human Reasoning](https://towardsdatascience.com/deep-learning-approaches-to-understand-human-reasoning-46f1805d454d)

## Deep Reinforcement Learning (Great Courses & Tutorials):
- [A Free course in Deep Reinforcement Learning from beginner to expert](https://simoninithomas.github.io/Deep_reinforcement_learning_Course/) [_Great_]
- [Deep Reinforcement Learning Algorithms with PyTorch](https://github.com/p-christ/Deep-Reinforcement-Learning-Algorithms-with-PyTorch)
- [Deep Reinforcement Learning - CS 285 Berkeley Course](http://rail.eecs.berkeley.edu/deeprlcourse/)
- [Solutions to UC Berkeley CS 285 (Fall 2019)](https://github.com/xuanlinli17/CS285_Fa19_Deep_Reinforcement_Learning)
- [Reinforcement Learning: An Introduction - main book in this field](http://www.incompleteideas.net/book/the-book-2nd.html)
- [CS234: Reinforcement Learning Course](https://www.youtube.com/playlist?list=PLoROMvodv4rOSOPzutgyCTapiGlY2Nd8u)
- [Introduction to Reinforcement Learning Course - by DeepMind](https://www.youtube.com/playlist?list=PLoROMvodv4rOSOPzutgyCTapiGlY2Nd8u)
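
As a tiny companion to the courses above, the tabular Q-learning update they all introduce fits in a few lines (the state/action sizes and the transition below are hypothetical):

```python
import numpy as np

# Tabular Q-learning update:
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99  # learning rate and discount factor

def q_update(s, a, r, s_next):
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

# Example single transition: state 0, action 1, reward 1.0, next state 2
q_update(0, 1, 1.0, 2)
```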

## Graph Neural Networks:
- [An Introduction to Graph Neural Networks](https://towardsdatascience.com/an-introduction-to-graph-neural-networks-e23dc7bdfba5)
- [How to Train Graph Convolutional Network Models in a Graph Database](https://towardsdatascience.com/how-to-train-graph-convolutional-network-models-in-a-graph-database-5c919a2f95d7)
- [A comprehensive survey on graph neural networks](https://arxiv.org/pdf/1901.00596)
- [Graph Neural Networks: A Review of Methods and Applications](https://arxiv.org/abs/1812.08434)

### Graph Neural Networks Frameworks:
- [Spektral](https://github.com/danielegrattarola/spektral)
- [Deep Graph Library - DGL](https://www.dgl.ai/)
- [PyTorch Geometric - PyG](https://github.com/rusty1s/pytorch_geometric)
- [ptgnn: A PyTorch GNN Library](https://github.com/microsoft/ptgnn)
- [Graph Data Augmentation Papers](https://github.com/zhao-tong/graph-data-augmentation-papers)
- [Neo4j: Graph Data Platform](https://neo4j.com/)
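
To give a feel for these frameworks, here is a minimal two-layer GCN sketch with PyTorch Geometric (the 3-node toy graph and feature sizes are illustrative assumptions):

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy graph: 3 nodes with 4-dim features; edges 0-1 and 1-2 (both directions)
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]], dtype=torch.long)
x = torch.randn(3, 4)
data = Data(x=x, edge_index=edge_index)

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = GCN(in_dim=4, hidden_dim=8, num_classes=2)
print(model(data).shape)  # torch.Size([3, 2]): per-node class scores
```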

## Best Practices for Training Deep Models:

### General Notes for Training Deep Models:
- [Deep Learning Tuning Playbook](https://github.com/google-research/tuning_playbook)

### PyTorch Lightning Notes & Gradient Accumulation:
- [PyTorch Lightning: Effective Training Techniques](https://pytorch-lightning.readthedocs.io/en/latest/advanced/training_tricks.html)
- [Gradient Accumulation in PyTorch](https://kozodoi.me/python/deep%20learning/pytorch/tutorial/2021/02/19/gradient-accumulation.html)
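
The gradient accumulation tutorial above reduces to a short training-loop pattern in plain PyTorch; a sketch, assuming `model`, `optimizer`, `criterion`, and `dataloader` are defined elsewhere:

```python
# Accumulate gradients over several small batches to emulate a larger batch.
accumulation_steps = 4  # effective batch size = accumulation_steps * batch_size
optimizer.zero_grad()
for step, (inputs, targets) in enumerate(dataloader):
    outputs = model(inputs)
    # Scale the loss so the accumulated gradient matches one large batch.
    loss = criterion(outputs, targets) / accumulation_steps
    loss.backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

In PyTorch Lightning the same effect is usually obtained with the Trainer's `accumulate_grad_batches` argument.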

### Loss Functions:
- [Loss Functions Explained](https://medium.com/deep-learning-demystified/loss-functions-explained-3098e8ff2b27)

### Imbalanced Dataset Handling:
- [Deal with an Imbalanced Dataset using WeightedRandomSampler in PyTorch](https://androidkt.com/deal-with-an-imbalanced-dataset-using-weightedrandomsampler-in-pytorch/)
- [imbalanced-dataset-sampler](https://github.com/ufoym/imbalanced-dataset-sampler) [Great]
- [Demystifying PyTorch's WeightedRandomSampler by Example](https://towardsdatascience.com/demystifying-pytorchs-weightedrandomsampler-by-example-a68aceccb452)
- [WeightedRandomSampler: Oversample or Undersample?](https://stackoverflow.com/questions/67799246/weighted-random-sampler-oversample-or-undersample)
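
A minimal sketch of the oversampling idea from the links above, using `WeightedRandomSampler` (the toy 900/100 class split is made up for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Toy imbalanced dataset: 900 samples of class 0, 100 of class 1.
features = torch.randn(1000, 8)
labels = torch.cat([torch.zeros(900, dtype=torch.long),
                    torch.ones(100, dtype=torch.long)])
dataset = TensorDataset(features, labels)

# Weight each sample by the inverse frequency of its class.
class_counts = torch.bincount(labels)                 # tensor([900, 100])
sample_weights = 1.0 / class_counts[labels].float()   # one weight per sample
sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(labels),
                                replacement=True)

# Note: pass the sampler instead of shuffle=True.
loader = DataLoader(dataset, batch_size=64, sampler=sampler)
```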

### Weight Initialization:
- [Deep Learning Best Practices (1) - Weight Initialization](https://medium.com/usf-msds/deep-learning-best-practices-1-weight-initialization-14e5c0295b94)
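
A minimal sketch of applying the initialization schemes discussed above in PyTorch (the toy model below is an assumption for illustration):

```python
import torch.nn as nn

def init_weights(module):
    # Kaiming init for layers followed by ReLU, Xavier for conv layers.
    if isinstance(module, nn.Linear):
        nn.init.kaiming_normal_(module.weight, nonlinearity="relu")
        if module.bias is not None:
            nn.init.zeros_(module.bias)
    elif isinstance(module, nn.Conv2d):
        nn.init.xavier_uniform_(module.weight)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
model.apply(init_weights)  # applies init_weights recursively to every submodule
```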

### Batch Normalization:
- [Batch Normalization in Neural Networks](https://towardsdatascience.com/batch-normalization-in-neural-networks-1ac91516821c)
- [Batch Normalization and Dropout in Neural Networks](https://towardsdatascience.com/batch-normalization-and-dropout-in-neural-networks-explained-with-pytorch-47d7a8459bcd)
- [Difference between Local Response Normalization and Batch Normalization](https://towardsdatascience.com/difference-between-local-response-normalization-and-batch-normalization-272308c034ac)
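
A small sketch of batch normalization and dropout in PyTorch, including the train/eval switch the posts above emphasize (layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Linear(64, 128),
    nn.BatchNorm1d(128),  # normalize activations with batch statistics
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(128, 10),
)

x = torch.randn(32, 64)
block.train()   # BatchNorm uses batch statistics, Dropout is active
_ = block(x)
block.eval()    # BatchNorm uses running statistics, Dropout is disabled
_ = block(x)
```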

### Learning Rate Scheduling & Initialization:
- [Automated Learning Rate Suggester](https://forums.fast.ai/t/automated-learning-rate-suggester/44199)
- [Learning Rate Finder - fastai](https://fastai1.fast.ai/callbacks.lr_finder.html)
- [Cyclical Learning Rates for Training Neural Networks](https://arxiv.org/abs/1506.01186)
- [ignite - Example of FastaiLRFinder](https://github.com/pytorch/ignite/blob/master/examples/notebooks/FastaiLRFinder_MNIST.ipynb)
- [Find Learning Rate - a gist code](https://gist.github.com/colllin/738cd2a9f0abec9be5e8b9becc23a812)
- [Learning rate finder - PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/1.1.3/lr_finder.html)
- [RAdam - On the Variance of the Adaptive Learning Rate and Beyond](https://github.com/LiyuanLucasLiu/RAdam)
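
As one concrete example from this list, cyclical learning rates (Smith, 2015) are available as a built-in PyTorch scheduler; a sketch, assuming `model`, `criterion`, and `dataloader` exist elsewhere:

```python
import torch

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=1e-4, max_lr=1e-2, step_size_up=2000, mode="triangular"
)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    scheduler.step()  # CyclicLR is stepped once per batch, not per epoch
```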

### Early Stopping:
- [Early Stopping in PyTorch - Bjarten](https://github.com/Bjarten/early-stopping-pytorch)
- [Catalyst - Early Stopping](https://catalyst-team.github.io/catalyst/faq/early_stopping.html)
- [ignite - Early Stopping](https://github.com/pytorch/ignite/blob/master/ignite/handlers/early_stopping.py)
- [PyTorch High-Level Training Sample](https://github.com/ncullen93/torchsample/blob/master/README.md)
- [PyTorch Discussion about Early Stopping](https://discuss.pytorch.org/t/early-stopping-in-pytorch/18800)
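
A minimal hand-rolled early-stopping loop in the spirit of the implementations above (a sketch; `train_one_epoch` and `evaluate` are hypothetical helpers returning the validation loss):

```python
import torch

best_val_loss = float("inf")
patience, epochs_without_improvement = 5, 0
max_epochs = 100

for epoch in range(max_epochs):
    train_one_epoch(model, train_loader, optimizer)    # hypothetical helper
    val_loss = evaluate(model, val_loader)             # hypothetical helper

    if val_loss < best_val_loss:
        best_val_loss = val_loss
        epochs_without_improvement = 0
        torch.save(model.state_dict(), "best_model.pt")  # keep the best checkpoint
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            print(f"Early stopping at epoch {epoch}")
            break
```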

### Tuning Guide Recipes:
- [PyTorch Tuning Guide Tutorial](https://pytorch.org/tutorials/recipes/recipes/tuning_guide.html)
- [PyTorch memory leak with dynamic size tensor input](https://github.com/pytorch/pytorch/issues/29893)
- [Karpathy: A Recipe for Training Neural Networks](http://karpathy.github.io/2019/04/25/recipe/)

### Training Optimizer:
- [What is gradient accumulation in deep learning](https://towardsdatascience.com/what-is-gradient-accumulation-in-deep-learning-ec034122cfa)

### PyTorch running & training on TPU (colab):
- [PyTorch XLA](https://github.com/pytorch/xla)
- [PyTorch XLA Colab](https://github.com/pytorch/xla/tree/master/contrib/colab)

### Evaluation Metrics:
- [Performance Metrics for Classification Problems in ML](https://medium.com/@MohammedS/performance-metrics-for-classification-problems-in-machine-learning-part-i-b085d432082b)
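
For quick reference, the usual classification metrics are one-liners in scikit-learn (the toy labels below are made up):

```python
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]

print(accuracy_score(y_true, y_pred))         # overall accuracy
print(confusion_matrix(y_true, y_pred))       # rows: true class, columns: predicted class
print(classification_report(y_true, y_pred))  # per-class precision, recall, F1
```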

### Validating ML Models:
- [Deepchecks: Validating ML Models & Data](https://github.com/deepchecks/deepchecks)

### Optimizing models when run on GPU:
- [Tips for reducing vram of gpu memories](https://www.linkedin.com/posts/pauliusztin_machinelearning-mlops-datascience-activity-7137704771905277953-G8Qt?utm_source=share&utm_medium=member_desktop)
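
One of the most common memory-saving tricks covered by tips like the above is automatic mixed precision (AMP). A sketch in plain PyTorch, assuming `model`, `optimizer`, `criterion`, and `dataloader` are defined elsewhere and the model is already on a CUDA device:

```python
import torch

scaler = torch.cuda.amp.GradScaler()
for inputs, targets in dataloader:
    inputs, targets = inputs.cuda(), targets.cuda()
    optimizer.zero_grad(set_to_none=True)    # set_to_none frees gradient memory
    with torch.cuda.amp.autocast():          # run the forward pass in float16 where safe
        loss = criterion(model(inputs), targets)
    scaler.scale(loss).backward()            # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
```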

## Conferences News:
- [Latest Computer Vision Trends from CVPR 2019](https://towardsdatascience.com/latest-computer-vision-trends-from-cvpr-2019-c07806dd570b)
- [Interesting 2019 CVPR papers](https://medium.com/@mattmiesnieks/interesting-2019-cvpr-papers-865e303db5ca)
- [Summaries of CVPR papers on ShortScience.org](https://www.shortscience.org/venue?key=conf/cvpr)
- [Summaries of ICCV papers on ShortScience.org](https://www.shortscience.org/venue?key=conf/iccv)
- [Summaries of ECCV papers on ShortScience.org](https://www.shortscience.org/venue?key=conf/eccv)
- [Meta ICLR 2024 Top Papers](https://www.linkedin.com/posts/aiatmeta_iclr2024-activity-7194398361943171074-XiVG?utm_source=share&utm_medium=member_android)

## Deep Learning Frameworks and Infrastructures:
- [How to set up a powerful and cost-efficient GPU server for deep learning](https://towardsdatascience.com/how-to-set-up-a-powerful-and-cost-efficient-gpu-server-for-deep-learning-aa1de0d4ea56)
- [Distributed ML with OpenMPI](https://clusterone.com/tutorials/openmpi-introduction)
- [Tensorflow 2.0 vs Mxnet](https://medium.com/@mouryarishik/tensorflow-2-0-vs-mxnet-41edd3b7574f)
- [TensorFlow is dead, long live TensorFlow!](https://hackernoon.com/tensorflow-is-dead-long-live-tensorflow-49d3e975cf04)

## Great Libraries:
- [The Unified Machine Learning Framework](https://github.com/unifyai/ivy)
- [Skorch - A scikit-learn compatible neural network library that wraps PyTorch](https://github.com/skorch-dev/skorch)
- [Hummingbird - compiles traditional ML models into tensor computations via PyTorch](https://github.com/microsoft/hummingbird)
- [BoTorch - Bayesian Optimization in PyTorch](https://botorch.org/)
- [torchvision 0.3: segmentation, detection models, new datasets and more](https://pytorch.org/blog/torchvision03/)
- [TorchAudio: an audio library for PyTorch](https://github.com/pytorch/audio)
- [AudTorch](https://github.com/audeering/audtorch)
- [TorchAudio-Contrib](https://github.com/keunwoochoi/torchaudio-contrib)
- [fastText - Facebook AI Research (FAIR)](https://fasttext.cc/)
- [Fairseq - Facebook AI Research (FAIR)](https://github.com/pytorch/fairseq)
- [ParlAI - dialogue models - Facebook AI Research (FAIR)](https://parl.ai/)
- [DALI - highly optimized engine for data pre-processing](https://github.com/NVIDIA/DALI)
- [Netron - GitHub](https://github.com/lutzroeder/netron) [_Visualizer for deep learning Models (Excellent)_]
- [Netron - Web Site](https://www.lutzroeder.com/ai)
- [JupyterLab GPU Dashboards](https://github.com/rapidsai/jupyterlab-nvdashboard) [_Good_]
- [PyTorch Hub](https://pytorch.org/hub)
- [Neural Structured Learning (NSL) in TensorFlow](https://github.com/tensorflow/neural-structured-learning)
- [Pywick - High-Level Training framework for Pytorch](https://github.com/achaiah/pywick)
- [torchbearer: A model fitting library for PyTorch](https://github.com/pytorchbearer/torchbearer)
- [torchlayers - Shape inference for PyTorch (like in Keras)](https://github.com/szymonmaszke/torchlayers)
- [torchtext - GitHub](https://github.com/pytorch/text)
- [torchtext - Doc](https://torchtext.readthedocs.io/en/latest/)
- [Optuna - hyperparameter optimization framework](https://optuna.org/)
- [PyTorchLightning](https://github.com/PyTorchLightning/pytorch-lightning)
- [Nvidia - runx - An experiment management tool](https://github.com/NVIDIA/runx)
- [MLogger: a Machine Learning logger](https://github.com/oval-group/mlogger)
- [ClearML - ML/DL development and production suite](https://github.com/allegroai/clearml)
- [Lime: Explaining the predictions of any ML classifier](https://github.com/marcotcr/lime)
- [Microsoft UniLM AI](https://github.com/microsoft/unilm) [Great]
- [mlnotify: No need to keep checking your training](https://github.com/aporia-ai/mlnotify)
- [NVIDIA NeMo - toolkit for creating Conversational AI (ASR, TTS, and NLP)](https://github.com/NVIDIA/NeMo)
- [Microsoft DeepSpeed](https://github.com/microsoft/DeepSpeed)
- [Mojo: a new programming language for AI developers](https://www.modular.com/mojo)
- [MLX: An array framework for Apple silicon](https://github.com/ml-explore/mlx)

## Great Models:
- [ResNext WSL](https://pytorch.org/hub/facebookresearch_WSL-Images_resnext/) [_Great Pretrained Model_]
- [Semi-Weakly Supervised (SWSL) ImageNet Models](https://pytorch.org/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext/) [_Great Pretrained Model_]
- [Deep High-Resolution Representation Learning (HRNet)](https://jingdongwang2017.github.io/Projects/HRNet/)

## Deep Model Conversion:
- [Convert Full ImageNet Pre-trained Model from MXNet to PyTorch](https://blog.paperspace.com/convert-full-imagenet-pre-trained-model-from-mxnet-to-pytorch/) [_Great_]
- [ONNX Runtime](https://github.com/microsoft/onnxruntime)

## Great Deep Learning Repositories (for learning DL-based programming):
- [deeplearning-models - PyTorch & TensorFlow Learning](https://github.com/rasbt/deeplearning-models) [_Very Excellent Repository_]
- [PyTorch Image Models](https://github.com/rwightman/pytorch-image-models) [_Great_]
- [5 Advanced PyTorch Tools to Level up Your Workflow](https://towardsdatascience.com/5-advanced-pytorch-tools-to-level-up-your-workflow-d0bcf0603ad5) [_Interesting_]

## PyTorch High-Level Libraries:
- [Catalyst - PyTorch framework for Deep Learning research and development](https://github.com/catalyst-team/catalyst) [_Great_]
- [PyTorch Lightning - GitHub](https://github.com/PyTorchLightning/pytorch-lightning) [_Great_]
- [PyTorch Lightning - Web Page](https://pytorchlightning.ai/)
- [Ignite - GitHub](https://github.com/pytorch/ignite) [_Great_]
- [Ignite - Web Page](https://pytorch.org/ignite/)
- [TorchMetrics](https://torchmetrics.readthedocs.io/en/latest/)
- [Ludwig AI: Data-centric declarative deep learning framework](https://github.com/ludwig-ai/ludwig) [**Great**]
- [PyTorch Kineto: CPU+GPU Profiling library](https://github.com/pytorch/kineto/)
- [PyTorch Profiler](https://pytorch.org/docs/master/profiler.html)
- [PyTorch Benchmarks](https://github.com/pytorch/benchmark)

## Annotation Tools:
- [label-studio](https://github.com/heartexlabs/label-studio)
- [label-studio with RTL Support (for Persian)](https://github.com/mmaghajani/label-studio)

## Other:
- [Clova AI Research - NAVER & LINE](https://github.com/clovaai)
- [Exploring Weight Agnostic Neural Networks](https://ai.googleblog.com/2019/08/exploring-weight-agnostic-neural.html)
- [Weight Agnostic Neural Networks](https://weightagnostic.github.io/)
- [Weight Agnostic Neural Networks - GitHub](https://github.com/google/brain-tokyo-workshop/tree/master/WANNRelease)
- [SAM: Sharpness-Aware Minimization for Efficiently Improving Generalization](https://github.com/google-research/sam)
- [Qualcomm Discusses Secret Dataset Generation Data](https://www.qualcomm.com/news/onq/2021/09/16/qa-ai-researcher-roland-memisevic-discusses-secret-dataset-generation-data)
- [State of AI Report 2021](https://www.stateof.ai/)
- [State of AI Report 2024](https://www.stateof.ai/)
- [Project Blink: AI-powered video editing on the web](https://labs.adobe.com/projects/blink/)
- [PyTorch Incremental Learning](https://github.com/yaoyao-liu/class-incremental-learning)
- [Google Research, 2022 & Beyond: Language, Vision and Generative Models](https://ai.googleblog.com/2023/01/google-research-2022-beyond-language.html?m=1&fbclid=PAAabtVizCEKhFC2kttHKozuEz4FX1cphjNDQVVL-kFZHA11GP9AVJ6rl9W-k)
- [Elicit: Ask a research question](https://elicit.org/) [Interesting]
- [Google People + AI Research (PAIR)](https://pair.withgoogle.com/) [Interesting business based AI topics]
- [Google Illuminate](https://illuminate.google.com/home) [Great]