{"id":21130401,"url":"https://github.com/serpapi/automatic-images-classifier-generator","last_synced_at":"2025-07-09T01:33:28.922Z","repository":{"id":58251349,"uuid":"528967844","full_name":"serpapi/automatic-images-classifier-generator","owner":"serpapi","description":"Generate machine learning models fully automatically to clasiffiy any images using SERP data","archived":false,"fork":false,"pushed_at":"2022-08-25T22:29:31.000Z","size":77,"stargazers_count":12,"open_issues_count":0,"forks_count":0,"subscribers_count":4,"default_branch":"master","last_synced_at":"2025-06-30T09:15:15.987Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/serpapi.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2022-08-25T18:13:54.000Z","updated_at":"2024-12-22T14:16:25.000Z","dependencies_parsed_at":"2022-08-31T01:02:04.139Z","dependency_job_id":null,"html_url":"https://github.com/serpapi/automatic-images-classifier-generator","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/serpapi/automatic-images-classifier-generator","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/serpapi%2Fautomatic-images-classifier-generator","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/serpapi%2Fautomatic-images-classifier-generator/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/serpapi%2Fautomatic-images-classifier-generator/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/serpapi%2Fautomatic
-images-classifier-generator/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/serpapi","download_url":"https://codeload.github.com/serpapi/automatic-images-classifier-generator/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/serpapi%2Fautomatic-images-classifier-generator/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":264375567,"owners_count":23598402,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-20T05:33:00.670Z","updated_at":"2025-07-09T01:33:28.579Z","avatar_url":"https://github.com/serpapi.png","language":"Python","readme":"# Generate machine learning models fully automatically to classify any images using SERP data\n\n`automatic-images-classifier-generator` is a machine learning tool written in Python using [SerpApi](https://serpapi.com), Pytorch, FastAPI, and Couchbase to provide automated large dataset creation, automated training and testing of deep learning models with the ability to tweak algorithms, storing the structure and results of neural networks all in one place.\n\nDisclaimer: This open-source machine learning software is not one of [the product offerings provided by SerpApi](https://serpapi.com/libraries). The software is using one of the product offerings, [SerpApi’s Google Images Scraper API](https://https://serpapi.com/images-results) to automatically create datasets. You may [register to SerpApi to claim free credits](https://serpapi.com/users/sign_up). 
You may also check the pricing page of SerpApi for detailed information.

- [Machine Learning Tools and Features provided by `automatic-images-classifier-generator`](#machine-learning-tools-and-features-provided-by-automatic-images-classifier-generator)
- [Installation](#installation)
- [Basic Usage of Machine Learning Tools](#basic-usage-of-machine-learning-tools)
- [Adding SERP Images to Storage Server](#adding-serp-images-to-storage-server)
  * [add_to_db](#add_to_db)
  * [multiple_query](#multiple_query)
    + [Example Dictionary](#example-dictionary)
- [Training a Model](#training-a-model)
  * [train](#train)
    + [Example Dictionary](#example-dictionary-1)
- [Testing a Model](#testing-a-model)
  * [test](#test)
    + [Example Dictionary](#example-dictionary-2)
- [Getting Information on the Training and Testing of the Model](#getting-information-on-the-training-and-testing-of-the-model)
  * [find_attempt](#find_attempt)
    + [Example Output Dictionary](#example-output-dictionary)
- [Support for Various Elements](#support-for-various-elements)
  * [Layers](#layers)
  * [Optimizers](#optimizers)
  * [Loss Functions](#loss-functions)
  * [Transforms](#transforms)
  * [Image Operations](#image-operations)
- [Keypoints for the State of the Machine Learning Tool and Its Future Roadmap](#keypoints-for-the-state-of-the-machine-learning-tool-and-its-future-roadmap)

---

# Machine Learning Tools and Features provided by `automatic-images-classifier-generator`

- Machine Learning Tools for automatic creation of large image datasets, powered by [SerpApi’s Google Images Scraper API](https://serpapi.com/images-results)

- Machine Learning Tools for automatically training deep learning models with customized tweaks for various algorithms

- Machine Learning Tools for automatically testing machine learning models

- Machine Learning Tools for customizing nodes within the pipelines of ML models, changing the dimensionality of machine learning algorithms, etc.

- Machine Learning Tools for keeping records of training losses, employed datasets, neural network structures, and accuracy reports

- Async Training and Testing of Machine Learning Models

- Delivery of the data necessary to create visualizations for cross-comparing different machine learning models with subtle changes in their neural network structure

- Various shortcuts for preprocessing with targeted data mining of SERP data

---

# Installation

1) Clone the repository
```
gh repo clone serpapi/automatic-images-classifier-generator
```

2) [Open a SerpApi Account (Free Credits Available upon Registration)](https://serpapi.com/users/sign_up)

3) [Download and Install Couchbase](https://www.couchbase.com/downloads)

4) Head to the Server Dashboard URL (Ex: `http://kagermanov:8091`), and create a bucket named `images`

![image](https://user-images.githubusercontent.com/73674035/186512765-048da222-c86b-4304-8456-5ae9bd6a8c8a.png)

5) Install the required Python libraries
```
pip install -r requirements.txt
```

6) Fill the `credentials.py` file with your server credentials and [SerpApi credentials](https://serpapi.com/manage-api-key)

7) Run the Server Setup File
```
python setup_server.py
```

8) Run the FastAPI Server
```
uvicorn main:app --host 0.0.0.0 --port 8000
```
or you may simply open `main.py` and run it with a debugging server:

![debug](https://user-images.githubusercontent.com/73674035/186514308-c8760bcd-3467-4255-893f-b327e357fb03.png)

9) Optionally, run the tests:
```
pytest test_main.py
```

---

# Basic Usage of Machine Learning Tools
1) Head to `localhost:8000/docs`
2) Use the `add_to_db` endpoint to update the image database
3) Use the `train` endpoint to train a model. The trained model will be saved in the `models` folder when the training is complete. The training is an async process.
Keep an eye out for the terminal outputs for progression.
4) Use the `test` endpoint to test a model.
5) Use the `find_attempt` endpoint to fetch data on the training and testing process (losses at each epoch, accuracy, etc.)

---

# Adding SERP Images to Storage Server
## `add_to_db`

The user can make single searches with [SerpApi's Google Images Scraper API](https://serpapi.com/images-results), and automatically add the results to the local image storage server.

<details>
<summary>Visual Documentation Playground</summary>

Head to `http://localhost:8000/docs#/default/create_query_add_to_db__post` and customize the dictionary:
![add_to_db](https://user-images.githubusercontent.com/73674035/186532744-2b1258ca-97c7-40d4-aaeb-11cf4e6a7510.png)

</details>

<details>
<summary>Curl Command with Explanation of Parameters</summary>

```
curl -X 'POST' \
  'http://localhost:8000/add_to_db/' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "google_domain": "<SerpApi Parameter: Google Domain to be scraped>",
  "limit": <External Parameter: Integer, Limit of Images to be downloaded>,
  "ijn": "<SerpApi Parameter: Page Number>",
  "q": "<SerpApi Parameter: Query to be searched for images>",
  "chips": "<SerpApi Parameter: chips parameter that specifies the image search>",
  "desired_chips_name": "<External Parameter: Specification Name for chips parameter>",
  "api_key": "<SerpApi Parameter: API Key>",
  "no_cache": <SerpApi Parameter: Choice for Cached or Live Results>
}'
```
</details>

### Example Dictionary

```py
{
  "google_domain": "google.com",
  "limit": 100,
  "ijn": 0,
  "chips": "",
  "desired_chips_name": "phone",
  "api_key": "<api_key>",
  "no_cache": True
}
```

## `multiple_query`

The user can make multiple searches with SerpApi's Google Images Scraper API, and automatically add the results to the local image storage server.

<details>
<summary>Visual Documentation Playground</summary>

Head to `http://localhost:8000/docs#/default/create_multiple_query_multiple_query__post` and customize the dictionary:
![multiple_query](https://user-images.githubusercontent.com/73674035/186536965-353a759b-6660-46ee-b327-ccd84775017f.png)

</details>

<details>
<summary>Curl Command with Explanation of Parameters</summary>

```
curl -X 'POST' \
  'http://localhost:8000/multiple_query/' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "queries": [
    "<SerpApi Parameter: Query to be searched for images>",
    "<SerpApi Parameter: Query to be searched for images>",
    ...
  ],
  "desired_chips_name": "<External Parameter: Specification Name for chips parameter>",
  "height": <External Parameter: Integer, Height of desired images>,
  "width": <External Parameter: Integer, Width of desired images>,
  "number_of_pages": <External Parameter: Total Number of pages to be scraped for each query>,
  "google_domain": "<SerpApi Parameter: Google Domain to be scraped>",
  "api_key": "<SerpApi Parameter: API Key>",
  "limit": <External Parameter: Integer, Limit of Images to be downloaded per each query on each page>,
  "no_cache": <SerpApi Parameter: Choice for Cached or Live Results>
}'
```

</details>

### Example Dictionary
```py
{
  "queries": [
    "american foxhound",
    "german shepherd",
    "caucasian shepherd"
  ],
  "desired_chips_name": "dog",
  "height": 500,
  "width": 500,
  "number_of_pages": 2,
  "google_domain": "google.com",
  "limit": 100,
  "api_key": "<api_key>",
  "no_cache": True
}
```

# Training a Model
The user can train a model with a customized dictionary from the `train` endpoint.

## `train`

<details>
<summary>Visual Documentation Playground</summary>

Head to `http://localhost:8000/docs#/default/train_train__post` and customize the dictionary:
![train](https://user-images.githubusercontent.com/73674035/186538215-01a15163-8775-4cdc-a760-0257f7b89507.png)

</details>

<details>
<summary>Curl Command with Explanation of Parameters</summary>

```
curl -X 'POST' \
  'http://localhost:8000/train/' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "model_name": "<Name the user wants to give the model; it will be saved in the models/ folder under the same name>",
  "criterion": {
    "name": "<Loss Function>",
    "<Parameter of the Loss Function>": "<Value for the Parameter of the Loss Function>",
    ...
  },
  "optimizer": {
    "name": "<Optimizer Function>",
    "<Parameter of the Optimizer>": "<Value for the Parameter of the Optimizer>",
    ...
  },
  "batch_size": <How many images will be fetched at each epoch of training>,
  "n_epoch": <Number of epochs>,
  "n_labels": 0, ### Keep it like that; it will be automatically updated in the automatic training process
  "image_ops": [
    {
      "<Name of the function in the PIL Image Class>": {
        "<Parameter of the function in the PIL Image Class>": <Value for the Parameter of the function in the PIL Image Class>,
        ...
      }
    },
    ...
  ],
  "transform": {
    "<Pytorch Transforms Layer Name without parameters>": true,
    "<Pytorch Transforms Layer Name with parameters>": {
      "<Pytorch Transforms Layer Parameter>": <Value for the Pytorch Transforms Layer Parameter>,
      ...
    }
  },
  "target_transform": {
    "<Pytorch Transforms Layer Name without parameters for the target of the operation (e.g. classification)>": true,
    "<Pytorch Transforms Layer Name with parameters for the target of the operation (e.g. classification)>": {
      "<Pytorch Transforms Layer Parameter for the target of the operation (e.g. classification)>": <Value for the Pytorch Transforms Layer Parameter for the target of the operation (e.g. classification)>,
      ...
    }
  },
  "label_names": [
    "<Label Name used in classification, same as the query used when adding it to the database>",
    ...
  ],
  "model": {
    "name": "<Class Name of the preset model in models.py>",
    "layers": [
      {
        "name": "<Pytorch Training Layer>",
        "<Parameter in the Pytorch Training Layer>": <Value for the Parameter in the Pytorch Training Layer>,
        ...
      },
      ...
    ]
  }
}'
```

</details>

### Example Dictionary
```py
{
  "model_name": "american_dog_species",
  "criterion": {"name": "CrossEntropyLoss"},
  "optimizer": {"name": "SGD", "lr": 0.001, "momentum": 0.9},
  "batch_size": 4,
  "n_epoch": 100,
  "n_labels": 0,
  "image_ops": [
    {"resize": {"size": [500, 500], "resample": "Image.ANTIALIAS"}},
    {"convert": {"mode": "'RGB'"}}
  ],
  "transform": {
    "ToTensor": True,
    "Normalize": {"mean": [0.5, 0.5, 0.5], "std": [0.5, 0.5, 0.5]}
  },
  "target_transform": {"ToTensor": True},
  "label_names": [
    "American Hairless Terrier imagesize:500x500",
    "Alaskan Malamute imagesize:500x500",
    "American Eskimo Dog imagesize:500x500",
    "Australian Shepherd imagesize:500x500",
    "Boston Terrier imagesize:500x500",
    "Boykin Spaniel imagesize:500x500",
    "Chesapeake Bay Retriever imagesize:500x500",
    "Catahoula Leopard Dog imagesize:500x500",
    "Toy Fox Terrier imagesize:500x500"
  ],
  "model": {
    "name": "",
    "layers": [
      {"name": "Conv2d", "in_channels": 3, "out_channels": 6, "kernel_size": 5},
      {"name": "ReLU", "inplace": True},
      {"name": "MaxPool2d", "kernel_size": 2, "stride": 2},
      {"name": "Conv2d", "in_channels": "auto", "out_channels": 16, "kernel_size": 5},
      {"name": "ReLU", "inplace": True},
      {"name": "MaxPool2d", "kernel_size": 2, "stride": 2},
      {"name": "Conv2d", "in_channels": "auto", "out_channels": 32, "kernel_size": 5},
      {"name": "ReLU", "inplace": True},
      {"name": "MaxPool2d", "kernel_size": 2, "stride": 2},
      {"name": "Flatten", "start_dim": 1},
      {"name": "Linear", "in_features": 111392, "out_features": 120},
      {"name": "ReLU", "inplace": True},
      {"name": "Linear", "in_features": "auto", "out_features": 84},
      {"name": "ReLU", "inplace": True},
      {"name": "Linear", "in_features": "auto", "out_features": "n_labels"}
    ]
  }
}
```
*Tips for Criterion*
- The `criterion` key is responsible for calling a loss function.
- If the user only provides the name of the criterion (loss function), it will be used without parameters.
- Some string inputs (especially when the user calls an external class from Pytorch) should be double quoted, like `"'Parameter Value'"`.
- The user may find information on the support for [Loss Functions](#loss-functions) later in the documentation.

*Tips for Optimizer*
- The `optimizer` key is responsible for calling an optimizer.
- If the user only provides the name of the optimizer, it will be used without parameters.
- Some string inputs (especially when the user calls an external class from Pytorch) should be double quoted, like `"'Parameter Value'"`.
- The user may find information on the support for [Optimizers](#optimizers) later in the documentation.

*Tips for Image Operations (PIL Image Functions)*
- The `image_ops` key is responsible for calling PIL operations on the input.
- The PIL integration is only supplementary to the Pytorch Transforms integration (`transform`, `target_transform` keys) and should be used for secondary purposes; many of the functions PIL supports are already wrapped in Pytorch Transforms.
- Each dictionary represents a separate operation.
- Some string inputs (especially when the user calls an external class from PIL) should be double quoted, like `"'Parameter Value'"`.
- The user may find information on the support for [Image Operations](#image-operations) later in the documentation.

*Tips for Pytorch Transforms*
- The `transform` and `target_transform` keys are both responsible for calling Pytorch Transforms.
The first is for the input and the second is for the label, respectively.
- The Transforms integration is the main integration responsible for preprocessing images and labels before training.
- Each key in the dictionary represents a separate operation.
- The order of the keys represents the order in which the sequential transforms are applied.
- Transforms without a parameter should be given the value `True` to be passed.
- Some string inputs (especially when the user calls an external class from Pytorch) should be double quoted, like `"'Parameter Value'"`.
- The user may find information on the support for [Transforms](#transforms) later in the documentation.

*Tips for Label Names*
- `label_names` is responsible for declaring label names.
- Label Names should be present in the Image Database Storage Server created by the user.
- If the user provided the `height` and `width` of images to be scraped in the `add_to_db` or `multiple_query` endpoints, the label name should be written with the addendum `imagesize:heightxwidth`. Otherwise, images without that size qualifier will be fetched if they are present on the server.
- Vectorized versions of labels can be created using `target_transform`.

*Tips for Model*
- The `model` key is responsible for calling or creating a model.
- If the `name` key is provided, a previously defined class with that name within `models.py` will be called, and the `layers` key will be ignored.
- If the `layers` key is provided and the `name` key is not, sequential layer creation will follow.
- Each dictionary in the `layers` array represents a training layer.
- The user may use the `auto` value for an input parameter to automatically take the size of the previous output layer, with limited support. For now, it is only supported between layers of the same kind.
- The user may use `n_labels` to indicate the number of labels in the final layer.
- The user may find information on the support for [Layers](#layers) later in the documentation.


# Testing a Model
## `test`
The user may test the trained model by fetching random images that have the same classifications as the labels.

<details>
<summary>Visual Documentation Playground</summary>

Head to `http://localhost:8000/docs#/default/validationtest_test__post` and customize the dictionary:
![test](https://user-images.githubusercontent.com/73674035/186639699-b9b58c8e-5cc6-44a5-b2f7-9a94b4708d0a.png)

</details>

<details>
<summary>Curl Command with Explanation of Parameters</summary>

```
curl -X 'POST' \
  'http://localhost:8000/test/' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "ids": [
    <Experimental: ids of a specific set of images to be fetched from the image database for testing>
  ],
  "limit": <Limit on how many random images with the same classification will be fetched from the database>,
  "label_names": [
    <Should be the same label names the user picked for training>
  ],
  "n_labels": 0, ## Should be kept 0; it will automatically update in the testing process.
  "criterion": {
    <Should be the loss function the user picked for training>
  },
  "model_name": "<Should be the same file name, without extension, the user picked when training>",
  "image_ops": [
    <Should be the image operations the user picked for training>
  ],
  "transform": {
    <Should be the same input transformation the user picked for training>
  },
  "target_transform": {
    <Should be the same label transformation the user picked for training>
  },
  "model": {
    "name": <Should be the class name the user picked for training>,
    "layers": [
      <Should be the same layers the user picked for training>
    ]
  }
}'
```
</details>

### Example Dictionary
```py
{
  "ids": [],
  "limit": 200,
  "label_names": [
    "American Hairless Terrier imagesize:500x500",
    "Alaskan Malamute imagesize:500x500",
    "American Eskimo Dog imagesize:500x500",
    "Australian Shepherd imagesize:500x500",
    "Boston Terrier imagesize:500x500",
    "Boykin Spaniel imagesize:500x500",
    "Chesapeake Bay Retriever imagesize:500x500",
    "Catahoula Leopard Dog imagesize:500x500",
    "Toy Fox Terrier imagesize:500x500"
  ],
  "n_labels": 0,
  "criterion": {"name": "CrossEntropyLoss"},
  "model_name": "american_dog_species",
  "image_ops": [
    {"resize": {"size": [500, 500], "resample": "Image.ANTIALIAS"}},
    {"convert": {"mode": "'RGB'"}}
  ],
  "transform": {
    "ToTensor": True,
    "Normalize": {"mean": [0.5, 0.5, 0.5], "std": [0.5, 0.5, 0.5]}
  },
  "target_transform": {"ToTensor": True},
  "model": {
    "name": "",
    "layers": [
      {"name": "Conv2d", "in_channels": 3, "out_channels": 6, "kernel_size": 5},
      {"name": "ReLU", "inplace": True},
      {"name": "MaxPool2d", "kernel_size": 2, "stride": 2},
      {"name": "Conv2d", "in_channels": "auto", "out_channels": 16, "kernel_size": 5},
      {"name": "ReLU", "inplace": True},
      {"name": "MaxPool2d", "kernel_size": 2, "stride": 2},
      {"name": "Conv2d", "in_channels": "auto", "out_channels": 32, "kernel_size": 5},
      {"name": "ReLU", "inplace": True},
      {"name": "MaxPool2d", "kernel_size": 2, "stride": 2},
      {"name": "Flatten", "start_dim": 1},
      {"name": "Linear", "in_features": 111392, "out_features": 120},
      {"name": "ReLU", "inplace": True},
      {"name": "Linear", "in_features": "auto", "out_features": 84},
      {"name": "ReLU", "inplace": True},
      {"name": "Linear", "in_features": "auto", "out_features": "n_labels"}
    ]
  }
}
```
---

# Getting Information on the Training and Testing of the Model
## `find_attempt`
Each time the user uses the `train` endpoint, an `Attempt` object is created in the database. This object is also updated each time the `test` endpoint is used. The user may also automatically check the status of the training from this object.

* At the beginning of each training, the `status` of the object will be `Training`.
* At the end of each training, the `status` of the object will be `Trained`.
* At the end of each testing, the `status` of the object will be `Complete`.

<details>
<summary>Visual Documentation Playground</summary>

Head to `http://localhost:8000/docs#/default/find_attempt_find_attempt__post` and enter the name of the model (also the filename without extension):
![find_attempt](https://user-images.githubusercontent.com/73674035/186650147-8a47bdec-2b9b-45a0-a9ae-9ad9cd76ece2.png)
</details>

<details>
<summary>Curl Command with Explanation of Parameters</summary>

```
curl -X 'POST' \
  'http://localhost:8000/find_attempt/?name=<Name of the Model (Also the filename of the saved model without extensions)>' \
  -H 'accept: application/json' \
  -d ''
```

</details>

### Example Output Dictionary

```py
{
  "accuracy": <Accuracy of the Model>,
  "id": <ID of the attempt>,
  "limit": <Limit on the number of images used in testing>,
  "n_epoch": <Number of epochs the model was trained for>,
  "name": "<Name of the Model>",
  "status": "<Status of the Attempt>",
  "testing_commands": {
    <Same as the testing commands used in the `test` endpoint>
  },
  "training_commands": {
    <Same as the training commands used in the `train` endpoint>
  },
  "training_losses": [
    ## Training losses at each epoch, for observing training quality
    2.1530826091766357,
    2.2155375480651855,
    2.212409019470215,
    ...
  ]
}
```
---

# Support for Various Elements

Below are the functions and algorithms supported, derived from the results of the `test_main.py` unit tests.
Functions, and algorithms not present in the list may or may not work. Feel free to try them out.\n\n## Layers\n\n\u003cdetails\u003e\n\u003csummary\u003eSupported Pytorch Convolutional Layers\u003c/summary\u003e\n\n- Conv1d\n  - `dtype` and `device` parameters are not supported.\n- Conv2d\n  - `dtype` and `device` parameters are not supported.\n- Conv3d\n  - `dtype` and `device` parameters are not supported.\n- ConvTranspose1d\n  - `dtype` and `device` parameters are not supported.\n- ConvTranspose2d\n  - `dtype` and `device` parameters are not supported.\n- ConvTranspose3d\n  - `dtype` and `device` parameters are not supported.\n- LazyConv1d\n  - `dtype` and `device` parameters are not supported.\n- LazyConv2d\n  - `dtype` and `device` parameters are not supported.\n- LazyConv3d\n  - `dtype` and `device` parameters are not supported.\n- LazyConvTranspose1d\n  - `dtype` and `device` parameters are not supported.\n- LazyConvTranspose2d\n  - `dtype` and `device` parameters are not supported.\n- LazyConvTranspose3d\n  - `dtype` and `device` parameters are not supported.\n- Unfold\n- Fold\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eUnsupported Pytorch Convolutional Layers\u003c/summary\u003e\n\nNone\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eSupported Pytorch Pooling Layers\u003c/summary\u003e\n\n- MaxPool1d\n- MaxPool2d\n- MaxPool3d\n- MaxUnpool1d\n- MaxUnpool2d\n- MaxUnpool3d\n- AvgPool1d\n- AvgPool2d\n- AvgPool3d\n- FractionalMaxPool2d\n  - `_random_samples` parameter is not supported.\n- FractionalMaxPool3d\n  - `_random_samples` parameter is not supported.\n- AdaptiveMaxPool1d\n- AdaptiveMaxPool2d\n- AdaptiveMaxPool3d\n- AdaptiveAvgPool1d\n- AdaptiveAvgPool2d\n- AdaptiveAvgPool3d\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eUnsupported Pytorch Pooling Layers\u003c/summary\u003e\n\n- LPPool1d\n- LPPool2d\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eSupported Pytorch Linear 
Layers\u003c/summary\u003e\n\n- Linear\n- Bilinear\n- LazyLinear\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eUnsupported Pytorch Linear Layers\u003c/summary\u003e\n\n- Identity\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eSupported Pytorch Utility Functions From Other Modules\u003c/summary\u003e\n\n- Flatten\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eUnsupported Pytorch Utility Functions From Other Modules\u003c/summary\u003e\n\n- Unflatten\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eSupported Pytorch Non-Linear Activation Layers\u003c/summary\u003e\n\n- ELU\n- Hardshrink\n  - `lambda` parameter is not supported.\n- Hardsigmoid\n- Hardtanh\n  - `min_value` and `max_value` parameters are same as `min_val` and `max_val` respectively.\n- Hardswish\n- LeakyReLU\n- LogSigmoid\n- MultiheadAttention\n  - `device`, and `dtype` parameters are not supported.\n- PReLU\n  - `device`, and `dtype` parameters are not supported.\n- ReLU\n- ReLU6\n- RReLU\n- SELU\n- CELU\n- GELU\n  - `approximate` parameter is not supported.\n- Sigmoid\n- SiLU\n- Mish\n- Softplus\n- Softshrink\n  - `lambda` parameter is not supported.\n- Softsign\n- Tanh\n- Tanhshrink\n- Threshold\n- GLU\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eUnsupported Pytorch Non-Linear Activation Layers\u003c/summary\u003e\n\nNone\n\u003c/details\u003e\n\n## Optimizers\n\n\u003cdetails\u003e\n\u003csummary\u003eSupported Pytorch Optimizer Algorithms\u003c/summary\u003e\n\n- Adadelta\n- Adagrad\n- Adam\n- AdamW\n- Adamax\n- ASGD\n- NAdam\n- RAdam\n- RMSprop\n- Rprop\n- SGD\n\n`foreach`, `maximize`, and `capturable` parameters have been deprecated.\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eUnsupported Pytorch Optimizer Algorithms\u003c/summary\u003e\n\n- LBFGS\n\u003c/details\u003e\n\n## Loss Functions\n\n\u003cdetails\u003e\n\u003csummary\u003eSupported Pytorch Loss 
Functions\u003c/summary\u003e\n\n- L1Loss\n- MSELoss\n- CrossEntropyLoss\n  - `weight` and `ignore_index` parameters are not supported yet.\n- PoissonNLLLoss\n- KLDivLoss\n- BCEWithLogitsLoss\n  - `weight` and `pos_weight` parameters are not supported yet.\n- HingeEmbeddingLoss\n- HuberLoss\n- SmoothL1Loss\n- SoftMarginLoss\n- MultiLabelSoftMarginLoss\n  - `weight` parameter is not supported yet.\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eUnsupported PyTorch Loss Functions\u003c/summary\u003e\n\n- CTCLoss\n- NLLLoss\n- GaussianNLLLoss\n- BCELoss\n- MarginRankingLoss\n- MultiLabelMarginLoss\n- CosineEmbeddingLoss\n- MultiMarginLoss\n- TripletMarginLoss\n- TripletMarginWithDistanceLoss\n\u003c/details\u003e\n\n## Transforms\n\n\u003cdetails\u003e\n\u003csummary\u003eSupported PyTorch Transforms\u003c/summary\u003e\n\n- CenterCrop\n- ColorJitter\n- FiveCrop\n- Grayscale\n- Pad\n- RandomAffine\n- RandomCrop\n- RandomGrayscale\n- RandomHorizontalFlip\n- RandomPerspective\n- RandomResizedCrop\n- RandomRotation\n- RandomVerticalFlip\n- Resize\n- TenCrop\n- GaussianBlur\n- RandomInvert\n- RandomPosterize\n- RandomSolarize\n- RandomAdjustSharpness\n- RandomAutocontrast\n- RandomEqualize\n- Normalize\n- RandomErasing\n- ToPILImage\n- ToTensor\n- PILToTensor\n- RandAugment\n- TrivialAugmentWide\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eUnsupported PyTorch Transforms\u003c/summary\u003e\n\n- RandomApply\n- RandomChoice\n- RandomOrder\n- LinearTransformation\n- ConvertImageDtype\n- Lambda\n- AutoAugmentPolicy\n- AutoAugment\n- AugMix\n- All Functional Transforms\n\u003c/details\u003e\n\n## Image Operations\n\n\u003cdetails\u003e\n\u003csummary\u003eSupported Image Operations (Functions from PIL Image Module Image Class)\u003c/summary\u003e\n\n- convert\n- crop\n- effect_spread\n- getchannel\n- reduce\n- resize\n- rotate\n- transpose\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eUnsupported Image Operations 
(Functions from PIL Image Module Image Class)\u003c/summary\u003e\n\n- alpha_composite\n- apply_transparency\n- copy\n- draft\n- entropy\n- filter\n- frombytes\n- point\n- quantize\n- remap_palette\n- transform\n- Any other function that doesn't return an Image object\n\n\u003c/details\u003e\n\n---\n\n# Keypoints for the State of the Machine Learning Tool and Its Future Roadmap\n\n- For now, this software only supports image datasets, and the aim is to create image-classifying machine learning models at scale. The broader purpose is to achieve better computer vision through scalability. Future plans include adding the other basic input tensor types for data science, data analysis, data analytics, and artificial intelligence projects. The open-source software could also be repurposed for other kinds of tasks such as regression, natural language processing, or other popular machine learning use cases.\n\n- There are no plans to support other programming languages such as Java, JavaScript, or C/C++; Python will be the only supported language for the foreseeable future. Support for other efficient big-data databases, such as SQL on Hadoop, could be a topic for discussion. The ability to add multiple images from local storage to the storage server is also planned.\n\n- The only machine learning framework supported is PyTorch. There are plans to extend support to other machine learning libraries and software such as TensorFlow, Keras, scikit-learn, Apache Spark, SciPy, Apache Mahout, Accord.NET, Weka, etc. in the future. Libraries already in use, such as google-image-results and NumPy, may be utilized further in the future.\n\n- To keep the software user-friendly, the device to train the model on (GPU via CUDA, or CPU) is automatically selected. 
Also, there are plans to create interactive data visualizations of different models that seasoned data scientists and beginners alike can understand. Drag-and-drop style machine learning libraries for model creation are not anticipated to be implemented.\n\n- This is open-source software designed for local use. The effects and cost of deploying it to cloud servers such as AWS or Google Cloud, or of integrating it with cloud machine learning solutions such as Amazon SageMaker, IBM Watson, Microsoft's Azure Machine Learning, and Jupyter Notebook, have not been tested yet. Use it at your own discretion. Future plans include implementing support for some of these large-scale ML tools.\n\n- The future plans above may or may not be implemented, depending on the schedule of events, support from other contributors, and their overall usefulness in automation. Multiple machine learning projects, with tutorials explaining the machine learning tools involved, will be released.\n"}
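As a quick illustration of the supported building blocks listed above, here is a minimal sketch in plain PyTorch combining automatic device selection with components from the supported lists: Conv2d, ReLU, MaxPool2d, Flatten, and Linear layers, the Adam optimizer, and CrossEntropyLoss (used without its `weight`/`ignore_index` parameters, which are noted above as not supported). This shows the underlying PyTorch pieces only, not this tool's own API; the model shape, hyperparameters, and random data are arbitrary examples:

```python
import torch
import torch.nn as nn

# Pick GPU (CUDA) when available, otherwise fall back to CPU,
# mirroring the automatic device selection described above.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny image classifier built only from layers on the supported lists.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 32x32 -> 32x32, 16 channels
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),                 # assumes 32x32 RGB inputs, 10 classes
).to(device)

# Supported optimizer and loss function from the lists above.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One hypothetical training step on random stand-in data.
images = torch.randn(4, 3, 32, 32, device=device)
labels = torch.randint(0, 10, (4,), device=device)

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```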
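The supported PIL image operations listed above are exactly the `Image` methods that return a new `Image` object, which is what makes them chainable. A small sketch of that property (the image contents, region, and sizes here are arbitrary; in practice the image would come from downloaded SERP results):

```python
from PIL import Image

# Start from an in-memory RGB image as a stand-in for a downloaded one.
img = Image.new("RGB", (640, 480), color=(200, 120, 40))

# Each supported operation returns a new Image object, so they chain.
processed = (
    img.convert("L")                       # convert: RGB -> grayscale
       .crop((0, 0, 480, 480))             # crop: take a square region
       .resize((224, 224))                 # resize: to a model input size
       .rotate(90)                         # rotate: 90 degrees, size unchanged
       .transpose(Image.FLIP_LEFT_RIGHT)   # transpose: mirror horizontally
)
```

Functions such as `getbbox` or `getpixel` return non-`Image` values, which is why, per the list above, anything that does not return an `Image` object falls outside the supported set.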