{"id":45180,"url":"https://github.com/tensorlayer/awesome-tensorlayer","name":"awesome-tensorlayer","description":"A curated list of dedicated resources and applications","projects_count":51,"last_synced_at":"2026-04-04T17:00:19.267Z","repository":{"id":87821103,"uuid":"132353608","full_name":"tensorlayer/awesome-tensorlayer","owner":"tensorlayer","description":"A curated list of dedicated resources and applications","archived":false,"fork":false,"pushed_at":"2020-01-15T09:17:47.000Z","size":97,"stargazers_count":269,"open_issues_count":1,"forks_count":58,"subscribers_count":13,"default_branch":"master","last_synced_at":"2026-03-07T17:50:48.733Z","etag":null,"topics":["adversarial-learning","autoencoder","cifar-10","computer-vision","convolutional-neural-networks","database","generative-adversarial-network","horovod","keras","lstm-neural-networks","mnist","natural-language-processing","recurrent-neural-networks","reinforcement-learning","segmentation","tensorflow","tensorflow-tutorials","tensorlayer","tf-slim","tflearn"],"latest_commit_sha":null,"homepage":"","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"cc0-1.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tensorlayer.png","metadata":{"files":{"readme":"readme.md","changelog":null,"contributing":"contributing.md","funding":null,"license":"license.md","code_of_conduct":"code-of-conduct.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2018-05-06T15:53:26.000Z","updated_at":"2025-12-09T13:25:51.000Z","dependencies_parsed_at":"2023-03-16T08:15:20.173Z","dependency_job_id":null,"html_url":"https://github.com/tensorlayer/awesome-tensorlayer","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/tensorlayer/awesome-tensorlayer","repository_url":"https://repos.ecosyst
e.ms/api/v1/hosts/GitHub/repositories/tensorlayer%2Fawesome-tensorlayer","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorlayer%2Fawesome-tensorlayer/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorlayer%2Fawesome-tensorlayer/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorlayer%2Fawesome-tensorlayer/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tensorlayer","download_url":"https://codeload.github.com/tensorlayer/awesome-tensorlayer/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorlayer%2Fawesome-tensorlayer/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31407359,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-04T10:20:44.708Z","status":"ssl_error","status_checked_at":"2026-04-04T10:20:06.846Z","response_time":60,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"readme":"# Awesome Tensorlayer - A curated list of dedicated resources\n\n\u003ca href=\"https://tensorlayer.readthedocs.io/en/stable/\"\u003e\n\u003cdiv align=\"center\"\u003e\n\t\u003cimg src=\"https://raw.githubusercontent.com/tensorlayer/tensorlayer/master/img/tl_transparent_logo.png\" width=\"50%\" 
height=\"30%\"/\u003e\n\u003c/div\u003e\n\u003c/a\u003e\n\n[![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)\n[![Build Status](https://api.travis-ci.org/tensorlayer/awesome-tensorlayer.svg?branch=master)](https://travis-ci.org/tensorlayer/awesome-tensorlayer)\n\nYou have just found TensorLayer! A high-performance deep learning and reinforcement learning library for industry and academia.\n\n## Contribute\n\nContributions welcome! Read the [contribution guidelines](contributing.md) first.\n\n\u003c!---\n## Contents\n- [1. Basics Examples](#1-basics-examples)\n- [2. Computer Vision](#2-computer-vision)\n- [3. Natural Language Processing](#3-natural-language-processing)\n- [4. Reinforcement Learning](#4-reinforcement-learning)\n- [5. Adversarial Learning](#5-adversarial-learning)\n- [6. Pretrained Models](#6-pretrained-models)\n- [7. Auto Encoders](#7-auto-encoders)\n- [8. Data and Model Management Tools](#8-data-and-model-management-tools)\n--\u003e\n\n## 1. 
Basics Examples\n\n### 1.1 MNIST and CIFAR10\n\nTensorLayer can define models in two ways.\nA static model lets you build the network in a fluent, declarative way, while a dynamic model gives you full control over the forward pass.\nPlease read the [docs](https://tensorlayer.readthedocs.io/en/latest/user/get_start_model.html) before you start.\n\n- [MNIST Simplest Example](https://github.com/tensorlayer/tensorlayer/blob/master/examples/basic_tutorials/tutorial_mnist_simple.py)\n- [MNIST Static Example](https://github.com/tensorlayer/tensorlayer/blob/master/examples/basic_tutorials/tutorial_mnist_mlp_static.py)\n- [MNIST Static Example for Reused Model](https://github.com/tensorlayer/tensorlayer/blob/master/examples/basic_tutorials/tutorial_mnist_mlp_static_2.py)\n- [MNIST Dynamic Example](https://github.com/tensorlayer/tensorlayer/blob/master/examples/basic_tutorials/tutorial_mnist_mlp_dynamic.py)\n- [MNIST Dynamic Example for Separated Models](https://github.com/tensorlayer/tensorlayer/blob/master/examples/basic_tutorials/tutorial_mnist_mlp_dynamic_2.py)\n- [MNIST Static Siamese Model Example](https://github.com/tensorlayer/tensorlayer/blob/master/examples/basic_tutorials/tutorial_mnist_siamese.py)\n- [CIFAR10 Static Example with Data Augmentation](https://github.com/tensorlayer/tensorlayer/blob/master/examples/basic_tutorials/tutorial_cifar10_cnn_static.py)\n\n### 1.2 DatasetAPI and TFRecord Examples\n\n- [Downloading and Preprocessing PASCAL VOC](https://github.com/tensorlayer/tensorlayer/blob/master/examples/data_process/tutorial_tf_dataset_voc.py) with the TensorLayer VOC data loader. 
[Zhihu article, in Chinese](https://zhuanlan.zhihu.com/p/31466173)\n- [Read and Save data in TFRecord Format](https://github.com/tensorlayer/tensorlayer/blob/master/examples/data_process/tutorial_tfrecord.py).\n- [Read and Save time-series data in TFRecord Format](https://github.com/tensorlayer/tensorlayer/blob/master/examples/data_process/tutorial_tfrecord3.py).\n- [Convert CIFAR10 in TFRecord Format for performance optimization](https://github.com/tensorlayer/tensorlayer/blob/master/examples/data_process/tutorial_tfrecord2.py).\n- More dataset loaders can be found in [tl.files.load_xxx](https://tensorlayer.readthedocs.io/en/latest/modules/files.html#load-dataset-functions).\n\n## 2. General Computer Vision\n\n- [Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization](https://github.com/tensorlayer/adaptive-style-transfer)\n- [OpenPose: Real-time multi-person keypoint detection](https://github.com/tensorlayer/openpose-plus)\n- [InsightFace](https://github.com/auroua/InsightFace_TF) - Additive Angular Margin Loss for Deep Face Recognition\n- [Spatial-Transformer-Nets (STN)](https://github.com/zsdonghao/Spatial-Transformer-Nets) trained on the MNIST dataset based on the paper by [[M. Jaderberg et al, 2015]](https://arxiv.org/abs/1506.02025).\n- [U-Net Brain Tumor Segmentation](https://github.com/zsdonghao/u-net-brain-tumor) trained on the BRATS 2017 dataset based on the paper by [[H. Dong et al, 2017]](https://arxiv.org/abs/1705.03820) with some modifications.\n- [Image2Text: im2txt](https://github.com/zsdonghao/Image-Captioning) based on the paper by [[O. Vinyals et al, 2016]](https://arxiv.org/abs/1609.06647).\n- More computer vision applications can be found in the [GAN Section](#4-gan).\n\n## 3. 
Quantization Networks\n\nSee [examples/quantized_net](https://github.com/tensorlayer/tensorlayer/tree/master/examples/quantized_net).\n\n- [BinaryNet](https://arxiv.org/abs/1602.02830) works on [mnist](https://github.com/tensorlayer/tensorlayer/blob/master/examples/quantized_net/tutorial_binarynet_mnist_cnn.py) and [cifar10](https://github.com/tensorlayer/tensorlayer/blob/master/examples/quantized_net/tutorial_binarynet_cifar10_tfrecord.py).\n- [Ternary Weight Networks](https://arxiv.org/abs/1605.04711) work on [mnist](https://github.com/tensorlayer/tensorlayer/blob/master/examples/quantized_net/tutorial_ternaryweight_mnist_cnn.py) and [cifar10](https://github.com/tensorlayer/tensorlayer/blob/master/examples/quantized_net/tutorial_ternaryweight_cifar10_tfrecord.py).\n- [DoReFa-Net](https://arxiv.org/abs/1606.06160) works on [mnist](https://github.com/tensorlayer/tensorlayer/blob/master/examples/quantized_net/tutorial_dorefanet_mnist_cnn.py) and [cifar10](https://github.com/tensorlayer/tensorlayer/blob/master/examples/quantized_net/tutorial_dorefanet_cifar10_tfrecord.py).\n- [Quantization for Efficient Integer-Arithmetic-Only Inference](https://arxiv.org/abs/1712.05877) works on [mnist](https://github.com/tensorlayer/tensorlayer/blob/master/examples/quantized_net/tutorial_quanconv_mnist.py) and [cifar10](https://github.com/tensorlayer/tensorlayer/blob/master/examples/quantized_net/tutorial_quanconv_cifar10.py).\n\n## 4. GAN\n\n- [DCGAN](https://github.com/tensorlayer/dcgan) trained on the CelebA dataset based on the paper by [[A. Radford et al, 2015]](https://arxiv.org/abs/1511.06434).\n- [CycleGAN](https://github.com/tensorlayer/cyclegan) improved with resize-convolution based on the paper by [[J. Zhu et al, 2017]](https://arxiv.org/abs/1703.10593).\n- [SRGAN](https://github.com/tensorlayer/srgan) - A Super Resolution GAN based on the paper by [[C. 
Ledig et al, 2016]](https://arxiv.org/abs/1609.04802).\n- [DAGAN](https://github.com/nebulaV/DAGAN): Fast Compressed Sensing MRI Reconstruction based on the paper by [[G. Yang et al, 2017]](https://doi.org/10.1109/TMI.2017.2785879).\n- [GAN-CLS for Text to Image Synthesis](https://github.com/zsdonghao/text-to-image) based on the paper by [[S. Reed et al, 2016]](https://arxiv.org/abs/1605.05396).\n- [Unsupervised Image-to-Image Translation with Generative Adversarial Networks](https://arxiv.org/abs/1701.02676), [code](https://github.com/zsdonghao/Unsup-Im2Im).\n- [BEGAN](https://github.com/2wins/BEGAN-tensorlayer): Boundary Equilibrium Generative Adversarial Networks based on the paper by [[D. Berthelot et al, 2017]](https://arxiv.org/abs/1703.10717).\n- [BiGAN](https://github.com/YOUSIKI/BiGAN.TensorLayer): Adversarial Feature Learning.\n- [Attention CycleGAN](https://github.com/Hermera/Unsupervised-Attention-guidedImage-to-Image-Translation): Unsupervised Attention-guided Image-to-Image Translation.\n- [MoCoGAN](https://github.com/Zyl-000/Project_MoCoGAN): Decomposing Motion and Content for Video Generation.\n- [InfoGAN](https://github.com/lisc55/InfoGAN): Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, 2016.\n- [Lifelong GAN](https://github.com/ChillingDream/Lifelong-Gan): Continual Learning for Conditional Image Generation, ICCV 2019.\n\n## 5. 
Natural Language Processing\n\n### 5.1 ChatBot\n\n- [Seq2Seq Chatbot](https://github.com/tensorlayer/seq2seq-chatbot) in 200 lines of code, built on [Seq2Seq](https://tensorlayer.readthedocs.io/en/latest/modules/layers.html#simple-seq2seq).\n\n### 5.2 Text Generation\n\n- [Text Generation with LSTMs](https://github.com/tensorlayer/tensorlayer/blob/master/examples/text_generation/tutorial_generate_text.py) - Generating Trump Speech.\n- Modelling Penn Treebank: [code1](https://github.com/tensorlayer/tensorlayer/blob/master/examples/text_ptb/tutorial_ptb_lstm.py) and [code2](https://github.com/tensorlayer/tensorlayer/blob/master/examples/text_ptb/tutorial_ptb_lstm_state_is_tuple.py), see [blog post](http://karpathy.github.io/2015/05/21/rnn-effectiveness/).\n\n### 5.3 Text Classification\n\n- [FastText Classifier](https://github.com/tensorlayer/tensorlayer/blob/master/examples/text_classification/tutorial_imdb_fasttext.py) running on the IMDB dataset based on the paper by [[A. Joulin et al, 2016]](https://arxiv.org/abs/1607.01759).\n\n### 5.4 Word Embedding\n\n- [Minimalistic Implementation of Word2Vec](https://github.com/tensorlayer/tensorlayer/blob/master/examples/text_word_embedding/tutorial_word2vec_basic.py) based on the paper by [[T. Mikolov et al, 2013]](https://arxiv.org/abs/1310.4546).\n\n### 5.5 Spam Detection\n\n- [Chinese Spam Detector](https://github.com/pakrchen/text-antispam).\n\n## 6. Reinforcement Learning\n\n- [DRL Tutorial for Academia](https://github.com/tensorlayer/tensorlayer/tree/master/examples/reinforcement_learning)\n- [DRL Zoo for Industry](https://github.com/tensorlayer/RLzoo)\n\n\n## 7. (Variational) Autoencoders\n\n- [Variational Autoencoder](https://github.com/yzwxx/vae-celebA) trained on the CelebA dataset.\n- [Variational Autoencoder](https://github.com/BUPTLdy/tl-vae) trained on the MNIST dataset.\n\n\n## 8. 
Pretrained Models\n\n- The guidelines for using pretrained models are [here](https://tensorlayer.readthedocs.io/en/latest/user/get_start_advance.html#pre-trained-cnn).\n\n## 9. Data and Model Management Tools\n\n- [Why Database?](https://tensorlayer.readthedocs.io/en/stable/modules/db.html).\n- Put Tasks into Database and Execute on Other Agents, see [code](https://github.com/tensorlayer/tensorlayer/tree/master/examples/database).\n- TensorDB applied to the Pong game on OpenAI Gym: [Trainer File](https://github.com/akaraspt/tl_paper/blob/master/tutorial_tensordb_atari_pong_trainer.py) and [Generator File](https://github.com/akaraspt/tl_paper/blob/master/tutorial_tensordb_atari_pong_generator.py) based on the following [blog post](http://karpathy.github.io/2016/05/31/rl/).\n- TensorDB applied to a classification task on the MNIST dataset: [Master File](https://github.com/akaraspt/tl_paper/blob/master/tutorial_tensordb_cv_mnist_master.py) and [Worker File](https://github.com/akaraspt/tl_paper/blob/master/tutorial_tensordb_cv_mnist_worker.py).\n\n\n## How to cite TL in Research Papers?\nIf you find this project useful, we would be grateful if you cite the TensorLayer paper:\n\n```\n@article{tensorlayer2017,\n    author  = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},\n    journal = {ACM Multimedia},\n    title   = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},\n    url     = {http://tensorlayer.org},\n    year    = {2017}\n}\n```\n\n\n# **ENJOY**\n","created_at":"2024-01-13T21:19:04.961Z","updated_at":"2026-04-04T17:00:19.267Z","primary_language":null,"list_of_lists":false,"displayable":true,"categories":["3. Quantization Networks","1. Basics Examples","4. GAN","5. Natural Language Processing","6. Reinforcement Learning","8. Pretrained Models","9. Data and Model Management Tools","2. General Computer Vision","7. 
(Variational) Autoencoders"],"sub_categories":["1.2 DatasetAPI and TFRecord Examples","1.1 MNIST and CIFAR10","5.2 Text Generation","5.3 Text Classification","5.4 Word Embedding","5.5 Spam Detection","5.1 ChatBot"],"projects_url":"https://awesome.ecosyste.ms/api/v1/lists/tensorlayer%2Fawesome-tensorlayer/projects"}