{"id":19957772,"url":"https://github.com/hidayatarg/learn-train-model-tensorflow","last_synced_at":"2025-10-29T03:04:15.249Z","repository":{"id":38048927,"uuid":"153938160","full_name":"hidayatarg/Learn-Train-Model-Tensorflow","owner":"hidayatarg","description":"#AI Tensorflow, Machine Learning and Building a data model to recognize object detection with Keras back-end. This a research work. This library is designed for everyone to learn fast.","archived":false,"fork":false,"pushed_at":"2018-12-04T13:07:23.000Z","size":264,"stargazers_count":3,"open_issues_count":0,"forks_count":0,"subscribers_count":2,"default_branch":"master","last_synced_at":"2025-01-12T07:13:43.871Z","etag":null,"topics":["build","cifar10","deep-learning","keras-tensorflow","linear-regression","machine-learning","neural-networks","object-detection","python","ssd","tensorflow","tensorflow-examples","tensorflow-tutorials","tensors","train-model"],"latest_commit_sha":null,"homepage":"http://www.arghandabi.com/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/hidayatarg.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-10-20T18:56:13.000Z","updated_at":"2024-04-22T16:48:06.000Z","dependencies_parsed_at":"2022-09-07T08:02:14.462Z","dependency_job_id":null,"html_url":"https://github.com/hidayatarg/Learn-Train-Model-Tensorflow","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hidayatarg%2FLearn-Train-Model-Tensorflow","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hidayatarg%2FLearn-Train-Model-
Tensorflow/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hidayatarg%2FLearn-Train-Model-Tensorflow/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/hidayatarg%2FLearn-Train-Model-Tensorflow/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/hidayatarg","download_url":"https://codeload.github.com/hidayatarg/Learn-Train-Model-Tensorflow/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":241389166,"owners_count":19955107,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["build","cifar10","deep-learning","keras-tensorflow","linear-regression","machine-learning","neural-networks","object-detection","python","ssd","tensorflow","tensorflow-examples","tensorflow-tutorials","tensors","train-model"],"created_at":"2024-11-13T01:38:53.422Z","updated_at":"2025-10-29T03:04:15.183Z","avatar_url":"https://github.com/hidayatarg.png","language":"Python","readme":"# Build and train a data model to recognize objects in images\n\n## Install Python 3.6.0\nInstall Python, paying attention to check `add-to Path variable` on the installation screen. Select the advanced settings and install it for all users. It will install Python under `C:\\Program Files\\Python`.\n\n## Install PyCharm\nPyCharm Community Edition is a good IDE to start these projects. 
It has good IntelliSense.\nIt would also be helpful to take a short tutorial on how to use PyCharm.\n\n\n## Install TensorFlow\n\n - TensorFlow is an open-source machine learning library for research\n   and production. You can visit https://www.tensorflow.org/tutorials/\n   for more details.\n - Tensors are the basic units of data and are arrays of\n   primitive values.\n - Programs have two sections: building graphs with nodes, and running the graphs.\n - Building:\n\t - Build a graph of nodes\n\t - Specify the model type, input parameters, expected output and the parameters we want the model to optimize\n\t - Might also want some training data and some testing data\n - Running:\n\t - Run the session on our graphs to output any results\n\t - Specify how many times we want our model to run\n\t - First train with training data, then use testing data to assess accuracy\n- One of the simplest models is linear regression. We basically estimate a line that fits some data and run our model until the line adjusts to fit the data.\n### Constant and Operation Nodes\nWe will explore the different kinds of nodes we will use to build our linear regression model. In this section we explain constant and operation nodes. 
These are the simplest kinds of nodes: they hold a value or produce the result of an operation.\n```Python\nimport tensorflow as tf\n\n# dtype float32 will be used for decimal values\nconst_node_1 = tf.constant(1.0, dtype=tf.float32)\nconst_node_2 = tf.constant(2.0)\n\nprint(const_node_1)\nprint(const_node_2)\n```\nThe output will be\n```Python\nTensor(\"Const:0\", shape=(), dtype=float32)\nTensor(\"Const_1:0\", shape=(), dtype=float32)\n```\nTo print the values themselves we need to run a session\n```Python\nimport tensorflow as tf\n\nconst_node_1 = tf.constant(1.0, dtype=tf.float32)\nconst_node_2 = tf.constant(2.0)\n\nprint(const_node_1)\nprint(const_node_2)\n\nsession = tf.Session()\nprint(session.run([const_node_1, const_node_2]))\n```\nOutput:\n```Python\n[1.0, 2.0]\n```\n\nWe add one more node\n```Python\nimport tensorflow as tf\n\nconst_node_1 = tf.constant(1.0, dtype=tf.float32)\nconst_node_2 = tf.constant(2.0, dtype=tf.float32)\nconst_node_3 = tf.constant([3.0, 4.0, 5.0], dtype=tf.float32)\n\nsession = tf.Session()\nprint(session.run(const_node_1))\nprint(session.run(const_node_2))\nprint(session.run(const_node_3))\n```\n\n#### Addition of nodes\n```python\nimport tensorflow as tf\n\nconst_node_1 = tf.constant(1.0, dtype=tf.float32)\nconst_node_2 = tf.constant(2.0, dtype=tf.float32)\nconst_node_3 = tf.constant([3.0, 4.0, 5.0], dtype=tf.float32)\n\nadd_node_1 = tf.add(const_node_1, const_node_2)\nadd_node_2 = const_node_1 + const_node_2\n\nsession = tf.Session()\nprint(session.run(add_node_1))\nprint(session.run(add_node_2))\n```\n\n#### Multiplication of nodes\n```python\nimport tensorflow as tf\n\nconst_node_1 = tf.constant(1.0, dtype=tf.float32)\nconst_node_2 = tf.constant(2.0, dtype=tf.float32)\nconst_node_3 = tf.constant([3.0, 4.0, 5.0], dtype=tf.float32)\n\nadd_node_1 = tf.add(const_node_1, const_node_2)\nadd_node_2 = const_node_1 + const_node_2\n\nMulti_node_v1 = const_node_2 * const_node_1\nsession = tf.Session()\nprint(session.run(add_node_1))  
\nprint(session.run(add_node_2))\nprint(session.run(Multi_node_v1))\n```\n### Placeholder Nodes\nPlaceholder nodes have no value when we create them; we pass in values when running the session. You can think of these nodes as taking input. For example, in our linear regression model `y=mx+b`, `m` and `b` are the variable nodes. `x` is a placeholder node because we will pass a value to it when we run the program. Similarly, while testing we may pass a value to `y`, which is also a placeholder node.\nExample:\n```python\nimport tensorflow as tf\n\n# data type set to float32\nplaceholder_1 = tf.placeholder(dtype=tf.float32)\nplaceholder_2 = tf.placeholder(dtype=tf.float32)\n\nsession = tf.Session()\n\n# Here we provide a value or a tensor (array) for the placeholder\n# We pass the node first, then a dict mapping each placeholder to its value\nprint(session.run(placeholder_1, {placeholder_1: [1.0, 2.0]}))\n```\n#### Multiplication of placeholder nodes\n```python\nimport tensorflow as tf\n\n# data type set to float32\nplaceholder_1 = tf.placeholder(dtype=tf.float32)\nplaceholder_2 = tf.placeholder(dtype=tf.float32)\n\nmultiply_node_1 = placeholder_1 * 3\nmultiply_node_2 = placeholder_1 * placeholder_2\n\nsession = tf.Session()\n\n# We need to provide values for both placeholders in order to multiply them\n# If we run a node that uses only one placeholder, we do not need to feed the other\nprint(session.run(multiply_node_1, {placeholder_1: [1.0, 2.0]}))\n\n# we provide a tensor value to the second placeholder\nprint(session.run(multiply_node_2, {placeholder_1: 4.0, placeholder_2: [2.0, 5.0]}))\n```\nOutput:\n```python\n[3. 6.]\n[8. 20.]\n```\n### Variable Nodes\nVariable nodes store an initial value that can change later. We must call an initializer to assign the value. 
With constant nodes, by contrast, we cannot assign a new value once the value is set.\nVariable nodes also come with lots of extra functionality: there are extra functions we can call on variable nodes.\n```python\nimport tensorflow as tf\n\nvariable_node_1 = tf.Variable([5.0], dtype=tf.float32)\n# For demonstration\nconst_node_1 = tf.constant([10.0], dtype=tf.float32)\n\nsession = tf.Session()\nprint(session.run(const_node_1))\n```\nOutput:\n`[10.]`\n\u003e ***!Alert:*** If we replace the constant node with the variable node we will get errors\n\nWe may think that we assigned a tensor to the variable node, but we didn't assign it, so `variable_node_1` doesn't hold that tensor (value). To resolve those errors we need to create a `Global Initializer`\n```python\nimport tensorflow as tf\n\nvariable_node_1 = tf.Variable([5.0], dtype=tf.float32)\n# For demonstration\nconst_node_1 = tf.constant([10.0], dtype=tf.float32)\n\nsession = tf.Session()\n\n# Create an initializer\ninit = tf.global_variables_initializer()\n# Run the initializer\nsession.run(init)\nprint(session.run(variable_node_1))\n```\nOutput:\n`[5.]`\n\u003e ***!Alert***: Calling the `Global Initializer` once is enough even if you use more than one variable node.\n\nYou can also call the global initializer inside `session.run()`:\n`session.run(tf.global_variables_initializer())`\nYou can also multiply it with a constant node inside `session.run`\n```python\nimport tensorflow as tf\n\nvariable_node_1 = tf.Variable([5.0], dtype=tf.float32)\n# For demonstration\nconst_node_1 = tf.constant([10.0], dtype=tf.float32)\n\nsession = tf.Session()\n\n# Create and run the initializer\nsession.run(tf.global_variables_initializer())\nprint(session.run(variable_node_1 * const_node_1))\n```\nOutput:\n`[50.]`\n\u003e tf.Variable is a class and tf.constant is not a class.\n##### Assigning a value to variable nodes\nOnly running the 
`variable_node_1.assign([10.0])` will not by itself assign the tensor to the variable node; we need to create and run the initializer first, then run the assign operation.\n```python\n# assign a value or tensor to the variable node\nsession.run(variable_node_1.assign([10.0]))\nprint(session.run(variable_node_1))\n```\nThe full script will be\n```python\nimport tensorflow as tf\n\nvariable_node_1 = tf.Variable([5.0], dtype=tf.float32)\nconst_node_1 = tf.constant([10.0], dtype=tf.float32)\n\nsession = tf.Session()\n# Create the initializer\ninit = tf.global_variables_initializer()\nsession.run(init)\nprint(session.run(variable_node_1 * const_node_1))\n\nsession.run(variable_node_1.assign([10.0]))\nprint(session.run(variable_node_1))\n```\nOutput:\n```python\n[50.]\n[10.]\n```\n## Linear Regression\n\n - Linear regression is one of the simplest machine learning models\n   \t\t\t\t\t\t`y=mx+b`\n - It gives the y value based on this function.\n - It is a very good prediction of where the line (through some data points) should lie.\n - The program optimizes the line by adjusting `m` and `b` until it minimizes the loss\n\t - Loss is the difference between an actual y value and the line itself\n\t - Minimal loss corresponds to a line that best fits the data, as points are on average closer to the line.\n- Training our model:\n\t- Take x values and corresponding y values as inputs.\n\t- Start with a guess for `m` and `b` and measure the loss.\n\t- Run the program to adjust `m` and `b` to minimize the loss based on the inputs.\n- The final model will fit a good line through the data and will be able to correctly predict the `y` value for a given `x` input.\n***In summary***, we will build the graph, then move on to training, and finally use some test data to assess the accuracy of our model.\n\n### Building Linear Regression Model\n\n```python\nimport tensorflow as tf\n\n# y = Wx + b\n\n# x will be a placeholder\n# W and b will be adjusted by the model\n# W is a weight playing the role of m\n\n# Create some X values\n# Create 
some Y values\n\n# x = [1, 2, 3, 4]\n# y = [0, -1, -2, -3]\n\n# these initial guesses are small and not far from the true values\nW = tf.Variable([-.5], dtype=tf.float32)\nb = tf.Variable([.5], dtype=tf.float32)\n\n# The initial guess affects training time:\n# If our guess is far from the real answer, the model will take longer to train\n# If our guess is near the real values, it will take less time to train\n\nx = tf.placeholder(dtype=tf.float32)\ny = tf.placeholder(dtype=tf.float32)\n\nlinear_model = W * x + b\n\n# Train\n\nx_train = [1, 2, 3, 4]\ny_train = [0, -1, -2, -3]\n\nsession = tf.Session()\n# Set the global initializer for variable nodes\ninit = tf.global_variables_initializer()\nsession.run(init)\n\n# Run our linear model and pass values\nprint(session.run(linear_model, {x: x_train}))\n```\nOutput:\n\n`[ 0.  -0.5 -1.  -1.5]`\n\nSo when we compare our values with y_train (the values we expect), we are actually not very far off.\n\nThe **_loss_** is the difference between the output and y_train.\nThe loss is minimized by adjusting the W and b values. 
We adjust the slope `W` and the `y` intercept `b`.\nSo basically our aim is to bring the output values closer to the `y_train` values.\nWe will create our loss as a `loss` variable below the `linear_model`\n```python\nloss = tf.reduce_sum(tf.square(linear_model - y))\n```\nHere we square the difference at each point; squaring keeps every term positive, so the sum measures how far the predictions are from the targets.\n\n```python\nimport tensorflow as tf\n\n# y = Wx + b\n\n# x will be a placeholder\n# W and b will be adjusted by the model\n# W is a weight playing the role of m\n\n# Create some X values\n# Create some Y values\n\n# x = [1, 2, 3, 4]\n# y = [0, -1, -2, -3]\n\n# these initial guesses are small and not far from the true values\nW = tf.Variable([-.5], dtype=tf.float32)\nb = tf.Variable([.5], dtype=tf.float32)\n\n# If our guess is far from the real answer, the model will take longer to train\n# If our guess is near the real values, it will take less time to train\n\nx = tf.placeholder(dtype=tf.float32)\ny = tf.placeholder(dtype=tf.float32)\n\nlinear_model = W * x + b\n\nloss = tf.reduce_sum(tf.square(linear_model - y))\n\n# Train\n\nx_train = [1, 2, 3, 4]\n#         [ 0.  -0.5 -1.  -1.5] - The values we received\ny_train = [0, -1, -2, -3]\n\n\nsession = tf.Session()\n# Set the global initializer for variable nodes\ninit = tf.global_variables_initializer()\nsession.run(init)\n\n# Run our linear model and pass values\n# print(session.run(linear_model, {x: x_train}))\nprint(session.run(loss, {x: x_train, y: y_train}))\n```\nOutput:\n```python\n3.5\n```\nWe get a loss of 3.5, which is very reasonable given that we started from -0.5.\n\n\u003e !Alert: if we give w=.5 and b=-.5 then we would have a loss of 31.5.\nThis is why a good initial prediction is important.\n\n\n\n\n### Building Linear Regression\nPreviously we worked on building the computation graph. In this part we will train our data, optimize the values of\nw and b, and minimize the loss. Our loss was `3.5`; we will try to make it close to zero.\n\nWe train our model to minimize the loss. 
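The same arithmetic can be sketched in plain Python to see what the session is doing: the block below recomputes the 3.5 loss for W = -0.5, b = 0.5, then runs a hand-written gradient-descent loop. The gradient formulas are written out manually here purely for illustration; TensorFlow's `GradientDescentOptimizer` derives them automatically.

```python
# Plain-Python check of the loss and of gradient descent, no TensorFlow.
W, b = -0.5, 0.5
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]

def loss(W, b):
    # sum of squared differences, matching tf.reduce_sum(tf.square(...))
    return sum((W * x + b - y) ** 2 for x, y in zip(x_train, y_train))

print(loss(W, b))  # 3.5, the same value the TensorFlow graph reports

learning_rate = 0.01
for _ in range(1000):
    # partial derivatives of the summed squared error w.r.t. W and b
    dW = sum(2 * (W * x + b - y) * x for x, y in zip(x_train, y_train))
    db = sum(2 * (W * x + b - y) for x, y in zip(x_train, y_train))
    W -= learning_rate * dW
    b -= learning_rate * db

print(W, b)  # approaches W = -1, b = 1, as in the TensorFlow run
```

With the same learning rate and iteration count as the TensorFlow version, the loop lands on nearly identical values for W and b.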
Minimizing the loss will adjust w and b (the variable tensors).\nOnce we have suitable values for w and b, our loss should be close to zero. After that we can feed in x values and get y values.\nWe will use TensorFlow core functions: gradient descent and the optimizer's minimize function.\n\nWe create an optimizer.\n\n`optimizer = tf.train.GradientDescentOptimizer(0.01)`\n\n0.01 is the learning rate: it controls how much the -.5 and .5 are modified each step. If we make this value very low, learning is very slow and consumes time. If we set the learning rate very high, we won't get an accurate model: the values change in large jumps and it becomes hard to pin down the correct result.\n\n```python\nloss = tf.reduce_sum(tf.square(linear_model - y))\noptimizer = tf.train.GradientDescentOptimizer(0.01)\ntrain = optimizer.minimize(loss)\n```\n\nIt will use the learning rate of `0.01` and adjust the variables in a way that minimizes the `loss` function.\n\u003e **_loss_** is the difference between the predictions made with the current values of w and b, and the expected values. Simply put, it is the **difference in `y`**, but it is highly dependent on `w` and `b`.\n\n\u003e **_Remember:_** x and y are placeholders; w and b are variable tensors, the only places that can be changed (adjusted).\n\nWe set the number of times we want to train our model; each pass over the training data is called an epoch.\n\n\u003e If we run it for a long time, it will take a long time to train, 
and if we don't run it for long enough, it won't be able to learn.\n\n```python\n# Loop\n# Run the train operation\nfor i in range(1000):\n    session.run(train, {x: x_train, y: y_train})\nprint(session.run([W, b]))\n```\n\nAll together:\n```python\nimport tensorflow as tf\n\n# y = Wx + b\n\n# x will be a placeholder\n# W and b will be adjusted by the model\n# W is a weight playing the role of m\n\n# Create some X values\n# Create some Y values\n\n# x = [1, 2, 3, 4]\n# y = [0, -1, -2, -3]\n\n# these initial guesses are small and not far from the true values\nW = tf.Variable([-.5], dtype=tf.float32)\nb = tf.Variable([.5], dtype=tf.float32)\n\n# If our guess is far from the real answer, the model will take longer to train\n# If our guess is near the real values, it will take less time to train\n\nx = tf.placeholder(dtype=tf.float32)\ny = tf.placeholder(dtype=tf.float32)\n\nlinear_model = W * x + b\n\nloss = tf.reduce_sum(tf.square(linear_model - y))\noptimizer = tf.train.GradientDescentOptimizer(0.01)\ntrain = optimizer.minimize(loss)\n\n# Train\n\nx_train = [1, 2, 3, 4]\n#         [ 0.  -0.5 -1.  -1.5] - The values we received\ny_train = [0, -1, -2, -3]\n\n\nsession = tf.Session()\n# Set the global initializer for variable nodes\ninit = tf.global_variables_initializer()\nsession.run(init)\n\n# Loop\nfor i in range(1000):\n    session.run(train, {x: x_train, y: y_train})\nprint(session.run([W, b]))\n```\n\nOutput:\n```python\n[array([-0.9999988], dtype=float32), array([0.9999964], dtype=float32)]\n```\nw is very close to -1 and b close to +1.\n\nNow we check the loss:\n```python\nimport tensorflow as tf\n\n# y = Wx + b\n\n# x will be a placeholder\n# W and b will be adjusted by the model\n# W is a weight playing the role of m\n\n\nW = tf.Variable([-.5], dtype=tf.float32)\nb = tf.Variable([.5], dtype=tf.float32)\n\n\nx = tf.placeholder(dtype=tf.float32)\ny = tf.placeholder(dtype=tf.float32)\n\n\nlinear_model = W * x + b\n\nloss = tf.reduce_sum(tf.square(linear_model - y))\noptimizer = tf.train.GradientDescentOptimizer(0.01)\ntrain = optimizer.minimize(loss)\n\n# Train\n\nx_train = [1, 2, 3, 4]\n#         [ 0.  -0.5 -1.  
-1.5] - The values we received\ny_train = [0, -1, -2, -3]\n\n\nsession = tf.Session()\n# Set the global initializer for variable nodes\ninit = tf.global_variables_initializer()\nsession.run(init)\n\n# Loop\n# Run the train operation\nfor i in range(1000):\n    session.run(train, {x: x_train, y: y_train})\nnew_W, new_b, new_loss = session.run([W, b, loss], {x: x_train, y: y_train})\nprint(\"New W: %s\" % new_W)\nprint(\"New b: %s\" % new_b)\nprint(\"New loss: %s\" % new_loss)\n```\n\nOutput:\n```python\nNew W: [-0.9999988]\nNew b: [0.9999964]\nNew loss: 8.526513e-12\n```\nThe new `W` is close to `-1`, the new `b` is close to `+1`, and the new loss is very close to zero.\n\nLet's run our model (linear_model) on new data. W and b are already optimized, so we just feed in new x values.\n```python\n# send an array of x values\nprint(session.run(linear_model, {x: [10, 20, 30, 40]}))\n```\nScript:\n```python\nimport tensorflow as tf\n\n# y = Wx + b\n\n# x will be a placeholder\n# W and b will be adjusted by the model\n# W is a weight playing the role of m\n\n\nW = tf.Variable([-.5], dtype=tf.float32)\nb = tf.Variable([.5], dtype=tf.float32)\n\n\nx = tf.placeholder(dtype=tf.float32)\ny = tf.placeholder(dtype=tf.float32)\n\nlinear_model = W * x + b\n\nloss = tf.reduce_sum(tf.square(linear_model - y))\noptimizer = tf.train.GradientDescentOptimizer(0.01)\ntrain = optimizer.minimize(loss)\n\n# Train\nx_train = [1, 2, 3, 4]\n#         [ 0.  -0.5 -1.  
-1.5] - The values we received\ny_train = [0, -1, -2, -3]\n\n\nsession = tf.Session()\n# Set the global initializer for variable nodes\ninit = tf.global_variables_initializer()\nsession.run(init)\n\n# Run our linear model and pass values\n# print(session.run(linear_model, {x: x_train}))\n# print(session.run(loss, {x: x_train, y: y_train}))\n\n# Loop\nfor i in range(1000):\n    session.run(train, {x: x_train, y: y_train})\nnew_W, new_b, new_loss = session.run([W, b, loss], {x: x_train, y: y_train})\n# send an array of x values\nprint(session.run(linear_model, {x: [10, 20, 30, 40]}))\n```\nOutput:\n```python\n[ -8.999992 -18.99998  -28.999968 -38.999958]\n```\nThat trained our linear regression model.\n\n### Import CIFAR Packages\nWe need to download CIFAR from an external source to use it in this project. You can download the CIFAR-10 dataset from https://www.cs.toronto.edu/~kriz/cifar.html. It is provided as open source.\n\n\u003e The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.\n\u003e The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.\n\u003e (CIFAR-10 URL: https://www.cs.toronto.edu/~kriz/cifar.html)\n\nHere we have 10 basic categories: airplane, automobile, bird, cat, deer, dog, frog, horse, ship and truck.\n\nIf we want to make this more complex we can work with the CIFAR-100 dataset, which supports one hundred categories. CIFAR-100 takes much more time to train than CIFAR-10. In this project we want to train fast, so we will use CIFAR-10.\n\nScroll down to the download section and download the Python version. 
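Per the dataset page, each batch file in the downloaded archive is a Python pickle holding a dict whose `data` entry has 10000 rows of 3072 bytes (one 32x32 RGB image per row) and whose `labels` entry has 10000 integers. A minimal loader might look like the sketch below; the `load_cifar_batch` name and the example pathname are our own, not part of the dataset.

```python
import pickle

def load_cifar_batch(pathname):
    """Read one CIFAR-10 batch file and return (data, labels).

    The batch files were pickled under Python 2, so when reading them
    under Python 3 we decode with encoding='bytes', and the dict keys
    come back as bytes (b'data', b'labels').
    """
    with open(pathname, 'rb') as f:
        batch = pickle.load(f, encoding='bytes')
    return batch[b'data'], batch[b'labels']

# Hypothetical usage once the archive is unpacked next to the script:
# data, labels = load_cifar_batch('cifar-10-batches-py/data_batch_1')
```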
Download link: https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz\nInstall Keras with the pip command in a terminal or command prompt: `pip install Keras`.\nWe need to put the CIFAR-10 archive in `C:\\Program Files\\Python36\\Lib\\site-packages\\keras\\datasets` and unzip it.\n\n![cifar-10 batches folder inside the keras datasets directory](https://ibb.co/HtBr73q)\n\nNow we will modify `cifar-10.py` and `cifar.py` with PyCharm.\nThe cifar.py script loads the CIFAR-10 dataset. We are not going to change cifar.py, but we need to modify cifar-10.py.\n\n - Line 20: `dirname = 'cifar-10-batches-py'` should point at the directory where the dataset is located, for example `dirname = 'C:\\Program Files\\Python36\\Lib\\site-packages\\keras\\datasets\\cifar-10-batches-py'`\n\nCreate a Python script file called `Image-Recognition-Trainer.py`\n\n\n## Displaying Images with PIL\nWe are going to import images and manipulate them. We use a Python imaging library called PIL to import, open and display images.\n\nWe import it with\n`from PIL import Image`. In case you get an error, the package might not be installed. You can install it by going into the project interpreter, clicking the `+` add button, searching for the `PIL` library and installing it.\n\nIn the following script we open an image and display it:\n\n```python\n# Import the library\nfrom PIL import Image\n\n# Image location (change this to your own path)\ncat_image_pathname = 'C:/Users/hp/Desktop/AIPROJECTS/Learn-Train-Model-Tensorflow/Image Recognition (CIFAR-10 Project)/Image/cat1.jpg'\n\n# Image\ncat_image = Image.open(cat_image_pathname)\n\n# Display the image\ncat_image.show()\n```\n\nThis script will display the image. 
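Since the CIFAR-10 model works on 32x32 RGB images, PIL can also downscale an opened image to that shape before it is fed to a model. The sketch below uses an in-memory `Image.new(...)` as a stand-in for `Image.open(pathname)`; the source size and colour are arbitrary.

```python
from PIL import Image

# Stand-in for an image loaded with Image.open(pathname)
image = Image.new('RGB', (640, 480), color=(200, 120, 40))

# CIFAR-10 images are 32x32 RGB, so normalize the mode and downscale
small = image.convert('RGB').resize((32, 32))

pixels = list(small.getdata())  # 32 * 32 = 1024 (r, g, b) tuples
print(small.size, len(pixels))
```

From here the pixel tuples can be flattened into the row layout a model expects.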
\nIn the following script we take the image pathname from the user\n```python\nfrom PIL import Image\n\n# Image location received from the user\ndisplay_image_pathname = input('Enter image pathname: ')\n\n# Image\ndisplay_image = Image.open(display_image_pathname)\n\n# Display the image\ndisplay_image.show()\n```","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhidayatarg%2Flearn-train-model-tensorflow","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fhidayatarg%2Flearn-train-model-tensorflow","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fhidayatarg%2Flearn-train-model-tensorflow/lists"}