{"id":27026878,"url":"https://github.com/serengil/tensorflow-101","last_synced_at":"2025-05-14T15:09:55.199Z","repository":{"id":37819967,"uuid":"96311842","full_name":"serengil/tensorflow-101","owner":"serengil","description":"TensorFlow 101: Introduction to Deep Learning","archived":false,"fork":false,"pushed_at":"2025-03-31T11:36:20.000Z","size":57530,"stargazers_count":1089,"open_issues_count":0,"forks_count":632,"subscribers_count":46,"default_branch":"master","last_synced_at":"2025-04-12T01:55:49.886Z","etag":null,"topics":["age-prediction","autoencoders","automl","celebrity-recognition","convolutional-neural-networks","deep-learning","deepface","emotion-analysis","face-recognition","facenet","facial-expression-recognition","gender-prediction","machine-learning","neural-networks","openface","python","style-transfer","tensorflow","transfer-learning","vgg-face"],"latest_commit_sha":null,"homepage":"https://www.youtube.com/watch?v=YjYIMs5ZOfc\u0026list=PLsS_1RYmYQQGxpKV44jsxXNgjEpRoW61w\u0026index=2","language":"Jupyter 
Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/serengil.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null},"funding":{"github":"serengil","patreon":"serengil","buy_me_a_coffee":"serengil"}},"created_at":"2017-07-05T11:26:33.000Z","updated_at":"2025-04-08T06:16:51.000Z","dependencies_parsed_at":"2024-06-10T21:00:13.491Z","dependency_job_id":"db3bac23-c3e7-4023-943b-69dda9d97c56","html_url":"https://github.com/serengil/tensorflow-101","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/serengil%2Ftensorflow-101","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/serengil%2Ftensorflow-101/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/serengil%2Ftensorflow-101/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/serengil%2Ftensorflow-101/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/serengil","download_url":"https://codeload.github.com/serengil/tensorflow-101/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248505865,"owners_count":21115354,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_
names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["age-prediction","autoencoders","automl","celebrity-recognition","convolutional-neural-networks","deep-learning","deepface","emotion-analysis","face-recognition","facenet","facial-expression-recognition","gender-prediction","machine-learning","neural-networks","openface","python","style-transfer","tensorflow","transfer-learning","vgg-face"],"created_at":"2025-04-04T23:16:15.247Z","updated_at":"2025-04-12T01:56:03.563Z","avatar_url":"https://github.com/serengil.png","language":"Jupyter Notebook","readme":"# TensorFlow 101: Introduction to Deep Learning\n\n[![Stars](https://img.shields.io/github/stars/serengil/tensorflow-101)](https://github.com/serengil/tensorflow-101)\n[![License](http://img.shields.io/:license-MIT-green.svg?style=flat)](https://github.com/serengil/tensorflow-101/blob/master/LICENSE)\n[![Patreon](https://img.shields.io/:become-patron-f96854.svg?style=flat\u0026logo=patreon)](https://www.patreon.com/serengil?repo=tensorflow101)\n[![GitHub Sponsors](https://img.shields.io/github/sponsors/serengil?logo=GitHub\u0026color=lightgray)](https://github.com/sponsors/serengil)\n[![Buy Me a Coffee](https://img.shields.io/badge/-buy_me_a%C2%A0coffee-gray?logo=buy-me-a-coffee)](https://buymeacoffee.com/serengil)\n\n[![Blog](https://img.shields.io/:blog-sefiks.com-blue.svg?style=flat\u0026logo=wordpress)](https://sefiks.com)\n[![YouTube](https://img.shields.io/:youtube-@sefiks-red.svg?style=flat\u0026logo=youtube)](https://www.youtube.com/@sefiks?sub_confirmation=1)\n[![Twitter](https://img.shields.io/:follow-@serengil-blue.svg?style=flat\u0026logo=x)](https://twitter.com/intent/user?screen_name=serengil)\n\nI have worked all my life in Machine Learning, and **I've never seen one algorithm knock over its benchmarks like Deep Learning** - Andrew Ng\n\nThis repository includes deep learning based project 
implementations I've done from scratch. You can find both the source code and documentation as a step-by-step tutorial. Model structures and pre-trained weights are shared as well.\n\n**Facial Expression Recognition** [`Code`](python/facial-expression-recognition.py), [`Tutorial`](https://sefiks.com/2018/01/01/facial-expression-recognition-with-keras/)\n\nThis is a custom CNN model fed with the Kaggle [FER 2013](https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data) data set. The model runs fast and produces satisfactory results.\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://sefiks.com/wp-content/uploads/2017/12/pablo-facial-expression.png\" width=\"70%\" height=\"70%\"\u003e\u003c/p\u003e\n\nWe can run emotion analysis in real time as well [`Real Time Code`](https://github.com/serengil/tensorflow-101/blob/master/python/emotion-analysis-from-video.py), [`Video`](https://youtu.be/Dm5ptTiIpkk)\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://sefiks.com/wp-content/uploads/2019/01/real-time-emotion-mark.png\" width=\"70%\" height=\"70%\"\u003e\u003c/p\u003e\n\n**Face Recognition** [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/vgg-face.ipynb), [`Tutorial`](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/)\n\nFace recognition is mainly based on convolutional neural networks. We feed two face images to a CNN model, and it returns a multi-dimensional vector representation for each. We then compare these representations to determine whether the two face images belong to the same person. 
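The comparison step can be sketched in plain Python, assuming we already have the embedding vectors from a model. The toy 3-dimensional vectors and the 0.40 threshold below are made up for illustration; real models produce much longer embeddings and each model has its own tuned threshold.

```python
import math

def cosine_distance(a, b):
    # Cosine distance between two embedding vectors: 0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1 - dot / (norm_a * norm_b)

def is_same_person(emb1, emb2, threshold=0.40):
    # Hypothetical threshold; every model needs its own tuned value.
    return cosine_distance(emb1, emb2) < threshold

# Toy embeddings standing in for real CNN outputs
alice_1 = [0.10, 0.90, 0.30]
alice_2 = [0.12, 0.88, 0.28]
bob     = [0.90, 0.10, 0.50]

print(is_same_person(alice_1, alice_2))  # similar vectors -> True
print(is_same_person(alice_1, bob))      # dissimilar vectors -> False
```

Euclidean distance works as an alternative metric; only the threshold changes.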
\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://sefiks.com/wp-content/uploads/2019/01/face-recognition-demo.png\" width=\"70%\" height=\"70%\"\u003e\u003c/p\u003e\n\nYou can find the most popular face recognition models below.\n\n| Model | Creator | LFW Score | Code | Tutorial |\n| ---   | --- | --- | ---  | --- |\n| VGG-Face | The University of Oxford | 98.78 | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/vgg-face.ipynb) | [`Tutorial`](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/) |\n| FaceNet | Google | 99.65 | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/vgg-face.ipynb) | [`Tutorial`](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/) |\n| DeepFace | Facebook | - | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/Facebook-Deepface.ipynb) | [`Tutorial`](https://sefiks.com/2020/02/17/face-recognition-with-facebook-deepface-in-keras/) |\n| OpenFace | Carnegie Mellon University | 93.80 | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/openface.ipynb) | [`Tutorial`](https://sefiks.com/2019/07/21/face-recognition-with-openface-in-keras/) |\n| DeepID | The Chinese University of Hong Kong | - | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/DeepID.ipynb) | [`Tutorial`](https://sefiks.com/2020/06/16/face-recognition-with-deepid-in-keras/) |\n| Dlib | Davis E. 
King | 99.38 | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/Dlib-Face-Recognition.ipynb) | [`Tutorial`](https://sefiks.com/2020/07/11/face-recognition-with-dlib-in-python/) |\n| OpenCV | OpenCV Foundation | - | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/opencv-face-recognition.ipynb) | [`Tutorial`](https://sefiks.com/2020/07/14/a-beginners-guide-to-face-recognition-with-opencv-in-python/) |\n| OpenFace in OpenCV | Carnegie Mellon University | 92.92 | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/opencv-dnn-face-recognition.ipynb) | [`Tutorial`](https://sefiks.com/2020/07/24/face-recognition-with-opencv-dnn-in-python/) |\n| SphereFace | Georgia Institute of Technology | 99.30 | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/SphereFace.ipynb) | [`Tutorial`](https://sefiks.com/2020/10/19/face-recognition-with-sphereface-in-python/) |\n| ArcFace | Imperial College London | 99.40 | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/ArcFace.ipynb) | [`Tutorial`](https://sefiks.com/2020/12/14/deep-face-recognition-with-arcface-in-keras-and-python/) |\n\nAll of these state-of-the-art face recognition models are wrapped in the [deepface library for python](https://github.com/serengil/deepface). You can build and run them with a few lines of code. For more information, please visit the [repo](https://github.com/serengil/deepface) of the library.\n\n**Real Time Deep Face Recognition Implementation** [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/deep-face-real-time.py), [`Video`](https://www.youtube.com/watch?v=tSU_lNi0gQQ)\n\nThese are real-time implementations of the common face recognition models mentioned in the previous section. VGG-Face has the highest face recognition score, but it is also the most complex of these models. 
On the other hand, OpenFace is a lightweight model: its accuracy is close to VGG-Face, and its simplicity makes it much faster than the others.\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://sefiks.com/wp-content/uploads/2020/02/deepface-cover.jpg\" width=\"90%\" height=\"90%\"\u003e\u003c/p\u003e\n\n| Model | Creator | Code | Demo |\n| ---   | --- | ---  | --- |\n| VGG-Face | Oxford University | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/deep-face-real-time.py) | [`Video`](https://www.youtube.com/watch?v=tSU_lNi0gQQ) |\n| FaceNet | Google | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/facenet-real-time.py) | [`Video`](https://youtu.be/vB1I5vWgTQg) |\n| DeepFace | Facebook | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/fb-deepface-real-time.py) | [`Video`](https://youtu.be/YjYIMs5ZOfc) |\n| OpenFace | Carnegie Mellon University | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/openface-real-time.py) | [`Video`](https://youtu.be/-4z2sL6wzP8) |\n\n**Large Scale Face Recognition**\n\nFace recognition requires applying face verification many times. This has O(n) time complexity, which becomes problematic for very large data sets (millions or billions of entries). If you have a powerful database server, you can use a relational database and regular SQL. You can also store facial embeddings in NoSQL databases and benefit from map-reduce technology. Moreover, approximate nearest neighbor (a-nn) algorithms reduce the search time dramatically. Spotify Annoy, Facebook Faiss and NMSLIB are excellent a-nn libraries, and Elasticsearch wraps NMSLIB while also offering high scalability. 
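The O(n) cost comes from comparing the query against every stored embedding. A minimal sketch of that exhaustive scan, with made-up random vectors standing in for real facial embeddings, shows exactly the loop that a-nn libraries such as Annoy or Faiss replace with sub-linear index lookups:

```python
import math
import random

def euclidean(a, b):
    # Euclidean distance between two embedding vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def linear_search(query, database):
    # Exhaustive O(n) scan: compare the query against every stored embedding.
    best_id, best_dist = None, float("inf")
    for identity, emb in database.items():
        d = euclidean(query, emb)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id, best_dist

random.seed(0)
# Hypothetical 128-dimensional embeddings for 1000 identities
db = {f"person_{i}": [random.random() for _ in range(128)] for i in range(1000)}

query = db["person_42"]  # querying with an embedding we know is stored
print(linear_search(query, db))  # finds person_42 at distance 0.0
```

At a billion identities this loop is far too slow per query, which is why the libraries in the table below build approximate indexes instead.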
You should build and run face recognition models with one of those a-nn libraries if you have a really large data set.\n\n| Library | Algorithm | Tutorial | Code | Demo |\n| --- | --- | --- | --- | --- | \n| Spotify Annoy | a-nn | [`Tutorial`](https://sefiks.com/2020/09/16/large-scale-face-recognition-with-spotify-annoy/) | - | [`Video`](https://youtu.be/Jpxm914o2xk) |\n| Facebook Faiss | a-nn | [`Tutorial`](https://sefiks.com/2020/09/17/large-scale-face-recognition-with-facebook-faiss/) | - | - |\n| NMSLIB | a-nn | [`Tutorial`](https://sefiks.com/2020/09/19/large-scale-face-recognition-with-nmslib/) | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/nmslib-fast-search.ipynb) | - |\n| Elasticsearch | a-nn | [`Tutorial`](https://sefiks.com/2020/11/27/large-scale-face-recognition-with-elasticsearch/) | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/Elastic-Face.ipynb) | [`Video`](https://youtu.be/i4GvuOmzKzo) |\n| MongoDB | k-NN | [`Tutorial`](https://sefiks.com/2021/01/22/deep-face-recognition-with-mongodb/) | [`Code`](https://sefiks.com/2021/01/22/deep-face-recognition-with-mongodb/) | - |\n| Cassandra | k-NN | [`Tutorial`](https://sefiks.com/2021/01/24/deep-face-recognition-with-cassandra/) | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/Cassandra-Face-Recognition.ipynb) | [`Video`](https://youtu.be/VQqHs6-4Ylg) |\n| Redis | k-NN | [`Tutorial`](https://sefiks.com/2021/03/02/deep-face-recognition-with-redis/) | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/Redis-Face-Recognition.ipynb) | [`Video`](https://youtu.be/eo-fTv4eYzo) |\n| Hadoop | k-NN | [`Tutorial`](https://sefiks.com/2021/01/31/deep-face-recognition-with-hadoop-and-spark/) | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/PySpark-Face-Recognition.ipynb) | - |\n| Relational Database | k-NN | [`Tutorial`](https://sefiks.com/2021/02/06/deep-face-recognition-with-sql/) | 
[`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/Face-Recognition-SQL.ipynb) | - |\n| Neo4j Graph | k-NN | [`Tutorial`](https://sefiks.com/2021/04/03/deep-face-recognition-with-neo4j/) | [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/Neo4j-Face-Recognition.ipynb) | [`Video`](https://youtu.be/X-hB2kBFBXs) |\n\n**Apparent Age and Gender Prediction** [`Tutorial`](https://sefiks.com/2019/02/13/apparent-age-and-gender-prediction-in-keras/), [`Code for age`](https://github.com/serengil/tensorflow-101/blob/master/python/apparent_age_prediction.ipynb), [`Code for gender`](https://github.com/serengil/tensorflow-101/blob/master/python/gender_prediction.ipynb)\n\nWe've used the VGG-Face model for apparent age prediction this time, applying transfer learning. Freezing the early layers' weights lets us get results quickly. \n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://sefiks.com/wp-content/uploads/2019/10/age-prediction-for-godfather-original.png\" width=\"70%\" height=\"70%\"\u003e\u003c/p\u003e\n\nWe can run age and gender prediction in real time as well [`Real Time Code`](https://github.com/serengil/tensorflow-101/blob/master/python/age-gender-prediction-real-time.py), [`Video`](https://youtu.be/tFI7vZn3P7E)\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://sefiks.com/wp-content/uploads/2019/02/age-real-time.jpg\" width=\"50%\" height=\"50%\"\u003e\u003c/p\u003e\n\n**Celebrity You Look-Alike Face Recognition** [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/Find-Look-Alike-Celebrities.ipynb), [`Tutorial`](https://sefiks.com/2019/05/05/celebrity-look-alike-face-recognition-with-deep-learning-in-keras/)\n\nApplying VGG-Face recognition to the [imdb data set](https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/) will find your celebrity look-alike if you discard the similarity-score threshold.\n\n\u003cp align=\"center\"\u003e\u003cimg 
src=\"https://sefiks.com/wp-content/uploads/2019/05/sefik-looks-alike-colin-hanks.jpg\" width=\"50%\" height=\"50%\"\u003e\u003c/p\u003e\n\nThis can be run in real time as well [`Real Time Code`](https://github.com/serengil/tensorflow-101/blob/master/python/celebrity-look-alike-real-time.py), [`Video`](https://youtu.be/RMgIKU1H8DY)\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://sefiks.com/wp-content/uploads/2019/05/celebrity-look-alike-real-time.jpg\" width=\"70%\" height=\"70%\"\u003e\u003c/p\u003e\n\n**Race and Ethnicity Prediction** \n[`Tutorial`](https://sefiks.com/2019/11/11/race-and-ethnicity-prediction-in-keras/), [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/Race-Ethnicity-Prediction-Batch.ipynb), [`Real Time Code`](https://github.com/serengil/tensorflow-101/blob/master/python/real-time-ethnicity-prediction.py), [`Video`](https://youtu.be/-ztiy5eJha8)\n\nEthnicity is a facial attribute as well and we can predict it from facial photos. We customize VGG-Face and we also applied transfer learning to classify 6 different ethnicity groups.\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://i0.wp.com/sefiks.com/wp-content/uploads/2019/11/FairFace-testset.png\" width=\"70%\" height=\"70%\"\u003e\u003c/p\u003e\n\n**Beauty Score Prediction** [`Tutorial`](https://sefiks.com/2019/12/25/beauty-score-prediction-with-deep-learning/), [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/Beauty.ipynb)\n\nSouth China University of Technology published a research paper about facial beauty prediction. They also [open-sourced](https://github.com/HCIILAB/SCUT-FBP5500-Database-Release) the data set. 60 labelers scored the beauty of 5500 people. We will build a regressor to find facial beauty score. 
We will also test the trained regressor on the huge imdb data set to find the most beautiful faces.\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://sefiks.com/wp-content/uploads/2020/01/beauty-imdb-v2.png\" width=\"70%\" height=\"70%\"\u003e\u003c/p\u003e\n\n**Attractiveness Score Prediction** [`Tutorial`](https://sefiks.com/2020/01/22/attractiveness-score-prediction-with-deep-learning/), [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/Attracticeness.ipynb)\n\nThe University of Chicago open-sourced the Chicago Face Database. The database consists of 1200 facial photos of 600 people. The photos are also labeled with attractiveness and babyface scores by hundreds of volunteer raters. So, we've built a machine learning model to predict an attractiveness score from a facial photo.\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://sefiks.com/wp-content/uploads/2020/01/attractiveness-cover-2.png\" width=\"70%\" height=\"70%\"\u003e\u003c/p\u003e\n\n**Making Arts with Deep Learning: Artistic Style Transfer** [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/style-transfer.ipynb), [`Tutorial`](https://sefiks.com/2018/07/20/artistic-style-transfer-with-deep-learning/), [`Video`](https://youtu.be/QKCcJVJ0DZA)\n\nWhat if Vincent van Gogh had painted the Istanbul Bosphorus? Today we can answer this question. 
A deep learning technique named [artistic style transfer](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf) enables us to transform ordinary images into masterpieces.\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://i2.wp.com/sefiks.com/wp-content/uploads/2019/01/gsu_vincent.png\" width=\"70%\" height=\"70%\"\u003e\u003c/p\u003e\n\n**Autoencoder and clustering** [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/Autoencoder.ipynb), [`Tutorial`](https://sefiks.com/2018/03/21/autoencoder-neural-networks-for-unsupervised-learning/)\n\nWe can use neural networks to represent data. If you design a model that is symmetric about its central layer and can reconstruct its input with acceptable loss, then the output of that central layer can represent the input. Such representations can contribute to many fields of deep learning, such as face recognition, style transfer or clustering.\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://i0.wp.com/sefiks.com/wp-content/uploads/2018/03/autoencoder-and-autodecoder.png\" width=\"70%\" height=\"70%\"\u003e\u003c/p\u003e\n\n**Convolutional Autoencoder and clustering** [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/ConvolutionalAutoencoder.ipynb), [`Tutorial`](https://sefiks.com/2018/03/23/convolutional-autoencoder-clustering-images-with-neural-networks/)\n\nWe can adapt the same representation approach to convolutional neural networks, too.\n\n**Transfer Learning: Consuming InceptionV3 to Classify Cat and Dog Images in Keras** [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/transfer_learning.py), [`Tutorial`](https://sefiks.com/2017/12/10/transfer-learning-in-keras-using-inception-v3/)\n\nWe can benefit from other researchers' results effortlessly. Google researchers competed in the Kaggle ImageNet competition and reached 97% accuracy. 
We will adapt Google's Inception V3 model to classify objects.\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://i2.wp.com/sefiks.com/wp-content/uploads/2017/12/inceptionv3-result.png\" width=\"70%\" height=\"70%\"\u003e\u003c/p\u003e\n\n**Handwritten Digit Classification Using Neural Networks** [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/HandwrittenDigitsClassification.py), [`Tutorial`](https://sefiks.com/2017/09/11/handwritten-digit-classification-with-tensorflow/)\n\nClassically, we had to apply feature extraction to data sets before using neural networks. Deep learning lets us skip this step: we just feed the data, and deep neural networks extract the features themselves. Here, we will feed handwritten digit data (MNIST) to deep neural networks and expect them to learn the digits.\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://i0.wp.com/sefiks.com/wp-content/uploads/2017/09/mnist-sample-output.png\" width=\"70%\" height=\"70%\"\u003e\u003c/p\u003e\n\n**Handwritten Digit Recognition Using Convolutional Neural Networks with Keras** [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/HandwrittenDigitRecognitionUsingCNNWithKeras.py), [`Tutorial`](https://sefiks.com/2017/11/05/handwritten-digit-recognition-using-cnn-with-keras/)\n\nConvolutional neural networks work similarly to the human brain. People look for patterns when classifying objects; for example, the mouth, nose and ear shapes of a cat are enough to classify it as a cat. We don't look at every pixel, we just focus on certain areas. A CNN applies filters to detect these kinds of shapes, and it performs better than conventional neural networks. 
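The filter idea can be sketched in plain Python, independent of any framework. The toy 4x3 image and the vertical-edge kernel below are illustrative assumptions; a trained CNN learns its kernel values rather than having them hand-written.

```python
def convolve2d(image, kernel):
    # 'Valid' 2D convolution (cross-correlation, as deep learning frameworks
    # implement it): slide the kernel over the image and sum the products.
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + u][j + v] * kernel[u][v]
                    for u in range(kh) for v in range(kw))
            row.append(s)
        out.append(row)
    return out

# Toy image: dark left half, bright right half
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# Hypothetical hand-written vertical-edge kernel
vertical_edge = [[-1, 1],
                 [-1, 1]]

print(convolve2d(image, vertical_edge))  # -> [[0, 2, 0], [0, 2, 0]]
```

The output map responds only in the middle column, exactly where the dark-to-bright edge sits, which is the "focus on some area" behavior described above.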
Here, we got almost 2% higher accuracy than with fully connected neural networks.\n\n**Automated Machine Learning and Auto-Keras for Image Data** [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/Auto-Keras.ipynb), [`Model`](https://github.com/serengil/tensorflow-101/blob/master/model/fer_keras_model_from_autokeras.json), [`Tutorial`](https://sefiks.com/2019/04/08/a-gentle-introduction-to-auto-keras/)\n\nThe AutoML concept aims to find the best network structure and hyper-parameters. Here, I've applied AutoML to the facial expression recognition data set. My custom design got 57% accuracy, whereas AutoML found a better model that got 66% accuracy, an almost 10-point improvement.\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://sefiks.com/wp-content/uploads/2019/04/google-automl.jpg\" width=\"70%\" height=\"70%\"\u003e\u003c/p\u003e\n\n**Explaining Deep Learning Models with SHAP** [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/SHAP-Explainable-AI.ipynb), [`Tutorial`](https://sefiks.com/2019/07/01/how-shap-can-keep-you-from-black-box-ai/)\n\nSHAP explains black box machine learning models and makes them transparent and explainable.\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://sefiks.com/wp-content/uploads/2019/07/fer-for-shap.png\" width=\"70%\" height=\"70%\"\u003e\u003c/p\u003e\n\n**Gradient Vanishing Problem** [`Code`](python/gradient-vanishing.py) [`Tutorial`](https://sefiks.com/2018/05/31/an-overview-to-gradient-vanishing-problem/)\n\nWhy did legacy activation functions such as sigmoid and tanh disappear into the pages of history?\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"https://sefiks.com/wp-content/uploads/2019/07/gradient-vanishing-problem-summary.jpg\" width=\"70%\" height=\"70%\"\u003e\u003c/p\u003e\n\n**How single layer perceptron works** [`Code`](python/single-layer-perceptron.py)\n\nThis is an implementation of the original 1957 perceptron model.\n\n\u003cp 
align=\"center\"\u003e\u003cimg src=\"https://i1.wp.com/sefiks.com/wp-content/uploads/2018/05/perceptron.png\" width=\"50%\" height=\"50%\"\u003e\u003c/p\u003e\n\n**Face Alignment for Face Recognition** [`Code`](https://github.com/serengil/tensorflow-101/blob/master/python/face-alignment.py), [`Tutorial`](https://sefiks.com/2020/02/23/face-alignment-for-face-recognition-in-python-within-opencv/)\n\nGoogle declared that face alignment increase its face recognition model accuracy from 98.87% to 99.63%. This is almost 1% accuracy improvement which means a lot for engineering studies.\n\n\u003cp align=\"center\"\u003e\u003cimg src=\"http://sefiks.com/wp-content/uploads/2020/02/rotate-from-scratch.jpg\" width=\"70%\" height=\"70%\"\u003e\u003c/p\u003e\n\n# Requirements\n\nI have tested this repository on the following environments. To avoid environmental issues, confirm your environment is same as below.\n\n```\nC:\\\u003epython --version\nPython 3.6.4 :: Anaconda, Inc.\n\nC:\\\u003eactivate tensorflow\n\n(tensorflow) C:\\\u003epython\nPython 3.5.5 |Anaconda, Inc.| (default, Apr  7 2018, 04:52:34) [MSC v.1900 64 bit (AMD64)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n\u003e\u003e\u003e import tensorflow as tf\n\u003e\u003e\u003e print(tf.__version__)\n1.9.0\n\u003e\u003e\u003e\n\u003e\u003e\u003e import keras\nUsing TensorFlow backend.\n\u003e\u003e\u003e print(keras.__version__)\n2.2.0\n\u003e\u003e\u003e\n\u003e\u003e\u003e import cv2\n\u003e\u003e\u003e print(cv2.__version__)\n3.4.4\n```\n\nTo get your environment up from zero, you can follow the instructions in the following videos.\n\n**Installing TensorFlow and Prerequisites** [`Video`](https://www.youtube.com/watch?v=JeR2M46tLlE)\n\n**Installing Keras** [`Video`](https://www.youtube.com/watch?v=qx5pivWvKC8)\n\n# Disclaimer\n\nThis repo might use some external sources. 
Notice that the related tutorial links and the comments in the code blocks already cite their references.\n\n# Support\n\nThere are many ways to support a project - starring⭐️ the GitHub repo is one 🙏\n\nYou can also support this work on [Patreon](https://www.patreon.com/serengil?repo=tensorflow101), [GitHub Sponsors](https://github.com/sponsors/serengil) or [Buy Me a Coffee](https://buymeacoffee.com/serengil).\n\n\u003ca href=\"https://www.patreon.com/serengil?repo=tensorflow101\"\u003e\n\u003cimg src=\"https://raw.githubusercontent.com/serengil/tensorflow-101/master/icons/patreon.png\" width=\"30%\" height=\"30%\"\u003e\n\u003c/a\u003e\n\n\u003ca href=\"https://buymeacoffee.com/serengil\"\u003e\n\u003cimg src=\"https://raw.githubusercontent.com/serengil/tensorflow-101/master/icons/bmc-button.png\" width=\"25%\" height=\"25%\"\u003e\n\u003c/a\u003e\n\n# Citation\n\nPlease cite tensorflow-101 in your publications if it helps your research. Here is an example BibTeX entry:\n\n```BibTeX\n@misc{serengil2021tensorflow,\n  abstract     = {TensorFlow 101: Introduction to Deep Learning for Python Within TensorFlow},\n  author       = {Serengil, Sefik Ilkin},\n  title        = {tensorflow-101},\n  howpublished = {https://github.com/serengil/tensorflow-101},\n  year         = {2021}\n}\n```\n\n# Licence\n\nThis repository is licensed under the MIT license - see [`LICENSE`](https://github.com/serengil/tensorflow-101/blob/master/LICENSE) for more details.\n","funding_links":["https://github.com/sponsors/serengil","https://patreon.com/serengil","https://buymeacoffee.com/serengil","https://www.patreon.com/serengil?repo=tensorflow101"],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fserengil%2Ftensorflow-101","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fserengil%2Ftensorflow-101","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fserengil%2Ftensorflow-101/lists"}