{"id":13631257,"url":"https://github.com/dataflowr/notebooks","last_synced_at":"2025-05-14T16:08:31.025Z","repository":{"id":44989269,"uuid":"148169964","full_name":"dataflowr/notebooks","owner":"dataflowr","description":"code for deep learning courses","archived":false,"fork":false,"pushed_at":"2025-03-04T11:22:01.000Z","size":140143,"stargazers_count":1109,"open_issues_count":0,"forks_count":314,"subscribers_count":27,"default_branch":"master","last_synced_at":"2025-04-05T19:07:13.299Z","etag":null,"topics":["deep-learning","pytorch","tutorials"],"latest_commit_sha":null,"homepage":"http://www.dataflowr.com","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/dataflowr.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2018-09-10T14:40:17.000Z","updated_at":"2025-04-03T01:58:22.000Z","dependencies_parsed_at":"2023-11-12T21:30:25.963Z","dependency_job_id":"ef901ac3-795c-4f0c-ac47-c2f92da9b2ff","html_url":"https://github.com/dataflowr/notebooks","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dataflowr%2Fnotebooks","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dataflowr%2Fnotebooks/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dataflowr%2Fnotebooks/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dataflowr%2Fnotebooks/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/Gi
tHub/owners/dataflowr","download_url":"https://codeload.github.com/dataflowr/notebooks/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248631668,"owners_count":21136554,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","pytorch","tutorials"],"created_at":"2024-08-01T22:02:18.068Z","updated_at":"2025-04-12T20:38:48.132Z","avatar_url":"https://github.com/dataflowr.png","language":"Jupyter Notebook","funding_links":[],"categories":["Jupyter Notebook","Tutorials"],"sub_categories":[],"readme":"# [Dataflowr: Deep Learning DIY](https://www.dataflowr.com/)\n\n[![Dataflowr](https://raw.githubusercontent.com/dataflowr/website/master/_assets/dataflowr_logo.png)](https://dataflowr.github.io/website/)\n\nCode and notebooks for the deep learning course [dataflowr](https://www.dataflowr.com/). Here is the schedule followed at école polytechnique in 2023:\n\n## :sunflower:Session:one: Finetuning VGG\n\n\u003e- [Module 1 - Introduction \u0026 General Overview](https://dataflowr.github.io/website/modules/1-intro-general-overview/)\nSlides + notebook Dogs and Cats with VGG + Practicals (more dogs and cats) \n\u003cdetails\u003e\n  \u003csummary\u003eThings to remember\u003c/summary\u003e\n\n\u003e - you do not need to understand everything to run a deep learning model! But the main goal of this course will be to come back to each step done today and understand them...\n\u003e - to use the dataloader from Pytorch, you need to follow the API (i.e. 
for classification store your dataset in folders)\n\u003e - using a pretrained model and modifying it to adapt it to a similar task is easy. \n\u003e - if you do not understand why we take this loss, that's fine, we'll cover that in Module 3.\n\u003e - even with a GPU, avoid unnecessary computations!\n\n\u003c/details\u003e\n\n## :sunflower:Session:two: PyTorch tensors and Autodiff\n\n\u003e- [Module 2a - PyTorch tensors](https://dataflowr.github.io/website/modules/2a-pytorch-tensors/)\n\u003e- [Module 2b - Automatic differentiation](https://dataflowr.github.io/website/modules/2b-automatic-differentiation/) + Practicals\n\u003e- MLP from scratch start of [HW1](https://dataflowr.github.io/website/homework/1-mlp-from-scratch/) \n\u003e- [another look at autodiff with dual numbers and Julia](https://github.com/dataflowr/notebooks/blob/master/Module2/AD_with_dual_numbers_Julia.ipynb)\n\u003cdetails\u003e\n  \u003csummary\u003eThings to remember\u003c/summary\u003e\n\n\u003e- Pytorch tensors = Numpy on GPU + gradients!\n\u003e- in deep learning, [broadcasting](https://numpy.org/doc/stable/user/basics.broadcasting.html) is used everywhere. The rules are the same as for Numpy.\n\u003e- Automatic differentiation is not only the chain rule! 
The backpropagation algorithm (or dual numbers) is a clever way to implement automatic differentiation...\n\n \u003c/details\u003e\n\n## :sunflower:Session:three: \n\u003e - [Module 3 - Loss function for classification](https://dataflowr.github.io/website/modules/3-loss-functions-for-classification/) \n\u003e - [Module 4 - Optimization for deep learning](https://dataflowr.github.io/website/modules/4-optimization-for-deep-learning/)\n\u003e - [Module 5 - Stacking layers](https://dataflowr.github.io/website/modules/5-stacking-layers/) and overfitting an MLP on CIFAR10: [Stacking_layers_MLP_CIFAR10.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module5/Stacking_layers_MLP_CIFAR10.ipynb)\n\u003e - [Module 6: Convolutional neural network](https://dataflowr.github.io/website/modules/6-convolutional-neural-network/)\n\u003e - how to regularize with dropout and estimate uncertainty with MC Dropout: [Module 15 - Dropout](https://dataflowr.github.io/website/modules/15-dropout/)\n\u003cdetails\u003e\n  \u003csummary\u003eThings to remember\u003c/summary\u003e\n\n\u003e- Loss vs Accuracy. 
Know your loss for a classification task!\n\u003e- know your optimizer (Module 4)\n\u003e- know how to build a neural net with torch.nn.module (Module 5)\n\u003e- know how to use convolution and pooling layers (kernel, stride, padding)\n\u003e- know how to use dropout \n\n\u003c/details\u003e\n\n## :sunflower:Session:four:\n\u003e - [Module 7 - Dataloading](https://dataflowr.github.io/website/modules/7-dataloading/)\n\u003e - [Module 8a - Embedding layers](https://dataflowr.github.io/website/modules/8a-embedding-layers/)\n\u003e - [Module 8b - Collaborative filtering](https://dataflowr.github.io/website/modules/8b-collaborative-filtering/) and build your own recommender system: [08_collaborative_filtering_empty.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module8/08_collaborative_filtering_empty.ipynb) (on a larger dataset [08_collaborative_filtering_1M.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module8/08_collaborative_filtering_1M.ipynb))\n\u003e - [Module 8c - Word2vec](https://dataflowr.github.io/website/modules/8c-word2vec/) and build your own word embedding [08_Word2vec_pytorch_empty.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module8/08_Word2vec_pytorch_empty.ipynb)\n\u003e - [Module 16 - Batchnorm](https://dataflowr.github.io/website/modules/16-batchnorm/) and check your understanding with [16_simple_batchnorm_eval.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module16/16_simple_batchnorm_eval.ipynb) and more [16_batchnorm_simple.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module16/16_batchnorm_simple.ipynb)\n\u003e - [Module 17 - Resnets](https://dataflowr.github.io/website/modules/17-resnets/) and transform your classifier into an out-of-distribution detector with [ODIN_mobilenet_empty.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module17/ODIN_mobilenet_empty.ipynb)\n\u003e - start of [Homework 2: Class Activation Map and adversarial 
examples](https://dataflowr.github.io/website/homework/2-CAM-adversarial/)\n\n\u003cdetails\u003e\n  \u003csummary\u003eThings to remember\u003c/summary\u003e\n\n\u003e - know how to use a dataloader\n\u003e - to deal with categorical variables in deep learning, use embeddings\n\u003e - in the case of word embedding, starting in an unsupervised setting, we built a supervised task (i.e. predicting central / context words in a window) and learned the representation using negative sampling\n\u003e - know your batchnorm\n\u003e - architectures with skip connections allow deeper models\n\n\u003c/details\u003e\n\n## :sunflower:Session:five:\n\u003e - [Module 9a: Autoencoders](https://dataflowr.github.io/website/modules/9a-autoencoders/) and code your noisy autoencoder [09_AE_NoisyAE.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module9/09_AE_NoisyAE.ipynb)\n\u003e - [Module 10: Generative Adversarial Networks](https://dataflowr.github.io/website/modules/10-generative-adversarial-networks/) and code your GAN, Conditional GAN and InfoGAN [10_GAN_double_moon.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module10/10_GAN_double_moon.ipynb)\n\u003e - [Module 13: Siamese Networks and Representation Learning](https://dataflowr.github.io/website/modules/13-siamese/)\n\u003e - start of [Homework 3: VAE for MNIST clustering and generation](https://dataflowr.github.io/website/homework/3-VAE/)\n\n## :sunflower:Session:six:\n\u003e - [Module 11a - Recurrent Neural Networks theory](https://dataflowr.github.io/website/modules/11a-recurrent-neural-networks-theory/)\n\u003e - [Module 11b - Recurrent Neural Networks practice](https://dataflowr.github.io/website/modules/11b-recurrent-neural-networks-practice/) and predict engine failure with [11\_predictions\_RNN\_empty.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module11/11_predictions_RNN_empty.ipynb)\n\u003e - [Module 11c - Batches with sequences in Pytorch](https://dataflowr.github.io/website/modules/11c-batches-with-sequences/)\n\n## :sunflower:Session:seven:\n\u003e - 
[Module 12 - Attention and Transformers](https://dataflowr.github.io/website/modules/12-attention/)\n\u003e - Correcting the PyTorch tutorial on attention in seq2seq: [12_seq2seq_attention.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module12/12_seq2seq_attention.ipynb)\n\u003e - Build your own microGPT: [GPT_hist.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module12/GPT_hist.ipynb)\n## :sunflower:Session:eight:\n\u003e - [Module 9b - UNets](https://dataflowr.github.io/website/modules/9b-unet/)\n\u003e - [Module 9c - Flows](https://dataflowr.github.io/website/modules/9c-flows/)\n\u003e - Build your own Real NVP: [Normalizing_flows_empty.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module9/Normalizing_flows_empty.ipynb)\n## :sunflower:Session:nine:\n\u003e - [Module 18a - Denoising Diffusion Probabilistic Models](https://dataflowr.github.io/website/modules/18a-diffusion/)\n\u003e - Train your own DDPM on MNIST: [ddpm_nano_empty.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module18/ddpm_nano_empty.ipynb)\n\u003e - Finetuning on CIFAR10: [ddpm_micro_sol.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module18/ddpm_micro_sol.ipynb)\n\nFor more updates: [![Twitter URL](https://img.shields.io/twitter/url/https/twitter.com/marc_lelarge.svg?style=social\u0026label=Follow%20%40marc_lelarge)](https://twitter.com/marc_lelarge) \n# :sunflower: All notebooks\n\n- [**Module 1: Introduction \u0026 General Overview**](https://dataflowr.github.io/website/modules/1-intro-general-overview/) \n    - Intro: finetuning VGG for dogs vs cats [01_intro.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module1/01_intro.ipynb)\n    - Practical: Using CNN for more dogs and cats [01_practical_empty.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module1/01_practical_empty.ipynb) and its solution [01_practical_sol.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module1/sol/01_practical_sol.ipynb)\n- 
[**Module 2: Pytorch tensors and automatic differentiation**](https://dataflowr.github.io/website/modules/2a-pytorch-tensors/)\n    - Basics on PyTorch tensors and automatic differentiation [02a_basics.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module2/02a_basics.ipynb)\n    - Linear regression from numpy to pytorch [02b_linear_reg.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module2/02b_linear_reg.ipynb)\n    - Practical: implementing backprop from scratch [02_backprop.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module2/02_backprop.ipynb) and its solution [02_backprop_sol.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module2/sol/02_backprop_sol.ipynb)\n    - Bonus: intro to JAX: autodiff the functional way [autodiff_functional_empty.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module2/autodiff_functional_empty.ipynb) and its solution [autodiff_functional_sol.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module2/autodiff_functional_sol.ipynb)\n    - Bonus: Linear regression in JAX [linear_regression_jax.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module2/linear_regression_jax.ipynb)\n    - Bonus: automatic differentiation with dual numbers [AD_with_dual_numbers_Julia.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module2/AD_with_dual_numbers_Julia.ipynb)\n- [**Homework 1: MLP from scratch**](https://dataflowr.github.io/website/homework/1-mlp-from-scratch/)\n    - [hw1_mlp.ipynb](https://github.com/dataflowr/notebooks/blob/master/HW1/hw1_mlp.ipynb) and its solution [hw1_mlp_sol.ipynb](https://github.com/dataflowr/notebooks/blob/master/HW1/sol/hw1_mlp_sol.ipynb)\n- [**Module 3: Loss functions for classification**](https://dataflowr.github.io/website/modules/3-loss-functions-for-classification/)\n    - An explanation of underfitting and overfitting with polynomial regression 
[03_polynomial_regression.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module3/03_polynomial_regression.ipynb)\n- [**Module 4: Optimization for deep learning**](https://dataflowr.github.io/website/modules/4-optimization-for-deep-learning/)\n    - Practical: code Adagrad, RMSProp, Adam, AMSGrad [04_gradient_descent_optimization_algorithms_empty.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module4/04_gradient_descent_optimization_algorithms_empty.ipynb) and its solution [04_gradient_descent_optimization_algorithms_sol.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module4/sol/04_gradient_descent_optimization_algorithms_sol.ipynb)\n- [**Module 5: Stacking layers**](https://dataflowr.github.io/website/modules/5-stacking-layers/)\n    - Practical: overfitting an MLP on CIFAR10 [Stacking_layers_MLP_CIFAR10.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module5/Stacking_layers_MLP_CIFAR10.ipynb) and its solution [MLP_CIFAR10.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module5/sol/MLP_CIFAR10.ipynb)\n- [**Module 6: Convolutional neural network**](https://dataflowr.github.io/website/modules/6-convolutional-neural-network/)\n    - Practical: build a simple digit recognizer with a CNN [06_convolution_digit_recognizer.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module6/06_convolution_digit_recognizer.ipynb)\n- [**Homework 2: Class Activation Map and adversarial examples**](https://dataflowr.github.io/website/homework/2-CAM-adversarial/)\n    - [HW2_CAM_Adversarial.ipynb](https://github.com/dataflowr/notebooks/blob/master/HW2/HW2_CAM_Adversarial.ipynb)\n\n- [**Module 8: Embedding layers**](https://dataflowr.github.io/website/modules/8a-embedding-layers/), [**Collaborative filtering**](https://dataflowr.github.io/website/modules/8b-collaborative-filtering/) and [**Word2vec**](https://dataflowr.github.io/website/modules/8c-word2vec/)\n    - Practical: Collaborative filtering with Movielens 100k dataset 
[08_collaborative_filtering_empty.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module8/08_collaborative_filtering_empty.ipynb)\n    - Practical: Refactoring code, collaborative filtering with Movielens 1M dataset [08_collaborative_filtering_1M.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module8/08_collaborative_filtering_1M.ipynb)\n    - Practical: Word Embedding (word2vec) in PyTorch [08_Word2vec_pytorch_empty.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module8/08_Word2vec_pytorch_empty.ipynb)\n    - Finding Synonyms and Analogies with GloVe [08_Playing_with_word_embedding.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module8/08_Playing_with_word_embedding.ipynb)\n- [**Module 9a: Autoencoders**](https://dataflowr.github.io/website/modules/9a-autoencoders/)\n    - Practical: denoising autoencoder (with convolutions and transposed convolutions) [09_AE_NoisyAE.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module9/09_AE_NoisyAE.ipynb)\n- [**Module 9b - UNets**](https://dataflowr.github.io/website/modules/9b-unet/)\n  - UNet for image segmentation [UNet_image_seg.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module9/UNet_image_seg.ipynb)\n- [**Module 9c - Flows**](https://dataflowr.github.io/website/modules/9c-flows/) \n  - implementing Real NVP [Normalizing_flows_empty.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module9/Normalizing_flows_empty.ipynb) and its solution [Normalizing_flows_sol.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module9/Normalizing_flows_sol.ipynb)\n- [**Module 10 - Generative Adversarial Networks**](https://dataflowr.github.io/website/modules/10-generative-adversarial-networks/)\n  - Conditional GAN and InfoGAN [10_GAN_double_moon.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module10/10_GAN_double_moon.ipynb)\n- [**Module 11 - Recurrent Neural 
Networks**](https://dataflowr.github.io/website/modules/11b-recurrent-neural-networks-practice/) and [**Batches with sequences in Pytorch**](https://dataflowr.github.io/website/modules/11c-batches-with-sequences/)\n  - notebook used in the theory course: [11_RNN.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module11/11_RNN.ipynb)\n  - predicting engine failure with RNN [11_predictions_RNN_empty.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module11/11_predictions_RNN_empty.ipynb)\n- [**Module 12 - Attention and Transformers**](https://dataflowr.github.io/website/modules/12-attention/)\n  - Correcting the [PyTorch tutorial](https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html) on attention in seq2seq: [12_seq2seq_attention.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module12/12_seq2seq_attention.ipynb) and its [solution](https://github.com/dataflowr/notebooks/blob/master/Module12/12_seq2seq_attention_solution.ipynb)\n  - building a simple transformer block and thinking like transformers: [GPT_hist.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module12/GPT_hist.ipynb) and its [solution](https://github.com/dataflowr/notebooks/blob/master/Module12/GPT_hist_sol.ipynb)\n- [**Module 13 - Siamese Networks and Representation Learning**](https://dataflowr.github.io/website/modules/13-siamese/)\n  - learning embeddings with contrastive loss: [13_siamese_triplet_mnist_empty.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module13/13_siamese_triplet_mnist_empty.ipynb) \n- [**Module 15 - Dropout**](https://dataflowr.github.io/website/modules/15-dropout/)\n  - Dropout on a toy dataset: [15a_dropout_intro.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module15/15a_dropout_intro.ipynb)\n  - playing with dropout on MNIST: [15b_dropout_mnist.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module15/15b_dropout_mnist.ipynb)\n- [**Module 16 - 
Batchnorm**](https://dataflowr.github.io/website/modules/16-batchnorm/)\n  - impact of batchnorm: [16_batchnorm_simple.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module16/16_batchnorm_simple.ipynb)\n  - playing with batchnorm without any training: [16_simple_batchnorm_eval.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module16/16_simple_batchnorm_eval.ipynb)\n- [**Module 18a - Denoising Diffusion Probabilistic Models**](https://dataflowr.github.io/website/modules/18a-diffusion/)\n  - Denoising Diffusion Probabilistic Models for MNIST: [ddpm_nano_empty.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module18/ddpm_nano_empty.ipynb) and its solution [ddpm_nano_sol.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module18/ddpm_nano_sol.ipynb)\n  - Denoising Diffusion Probabilistic Models for CIFAR10: [ddpm_micro_sol.ipynb](https://github.com/dataflowr/notebooks/blob/master/Module18/ddpm_micro_sol.ipynb)\n- [**Module - Deep Learning on graphs**](https://dataflowr.github.io/website/modules/graph0/)\n  - Inductive bias in GCN: a spectral perspective [GCN_inductivebias_spectral.ipynb](https://github.com/dataflowr/notebooks/blob/master/graphs/GCN_inductivebias_spectral.ipynb) and for colab [GCN_inductivebias_spectral-colab.ipynb](https://github.com/dataflowr/notebooks/blob/master/graphs/GCN_inductivebias_spectral-colab.ipynb)\n  - Graph ConvNets in PyTorch [spectral_gnn.ipynb](https://github.com/dataflowr/notebooks/blob/master/graphs/spectral_gnn.ipynb)\n- **NERF**\n  - PyTorch Tiny NERF [tiny_nerf_extended.ipynb](https://github.com/dataflowr/notebooks/blob/master/nerf/tiny_nerf_extended.ipynb)\n\n\n## Usage\n\nIf you want to run the notebooks locally, follow the instructions in [Module 0 - Running the notebooks locally](https://dataflowr.github.io/website/modules/0-sotfware-installation/)\n\n## 2020 version of the course\nArchives are available on the archive-2020 
branch.\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdataflowr%2Fnotebooks","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdataflowr%2Fnotebooks","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdataflowr%2Fnotebooks/lists"}