{"id":20031752,"url":"https://github.com/astrodynamic/multilayerperceptron-in-qt-cpp","last_synced_at":"2025-03-02T05:24:39.923Z","repository":{"id":155833131,"uuid":"596373687","full_name":"Astrodynamic/MultilayerPerceptron-in-Qt-CPP","owner":"Astrodynamic","description":" MultilayerPerceptron Project is a C++ implementation of a multilayer perceptron capable of classifying handwritten Latin alphabet images with 2 to 5 hidden layers. Built with the MVC pattern and Qt library, it requires C++17, CMake, Qt5 Widgets/Charts, and Google Test library. The program can be customized and features options.","archived":false,"fork":false,"pushed_at":"2023-05-09T02:57:05.000Z","size":356,"stargazers_count":3,"open_issues_count":0,"forks_count":0,"subscribers_count":2,"default_branch":"develop","last_synced_at":"2025-01-12T18:09:37.752Z","etag":null,"topics":["cmake","cpp","cpp-programming","cpp17","gui","image-classification","machine-learning","makefile","mlp","multilayer-perceptron","multilayer-perceptron-network","neural-network","qt","ui"],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Astrodynamic.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-02-02T03:13:31.000Z","updated_at":"2024-06-21T09:55:02.000Z","dependencies_parsed_at":null,"dependency_job_id":"c31317fc-ccf5-4da4-8b7d-05aca9dc4385","html_url":"https://github.com/Astrodynamic/MultilayerPerceptron-in-Qt-CPP","commit_stats":null,"previous_names":["astrodynamic/multilayerperceptron-in-qt-cpp"],"tags_count":0,"template":f
alse,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Astrodynamic%2FMultilayerPerceptron-in-Qt-CPP","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Astrodynamic%2FMultilayerPerceptron-in-Qt-CPP/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Astrodynamic%2FMultilayerPerceptron-in-Qt-CPP/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Astrodynamic%2FMultilayerPerceptron-in-Qt-CPP/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Astrodynamic","download_url":"https://codeload.github.com/Astrodynamic/MultilayerPerceptron-in-Qt-CPP/tar.gz/refs/heads/develop","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":241462634,"owners_count":19966896,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cmake","cpp","cpp-programming","cpp17","gui","image-classification","machine-learning","makefile","mlp","multilayer-perceptron","multilayer-perceptron-network","neural-network","qt","ui"],"created_at":"2024-11-13T09:34:36.591Z","updated_at":"2025-03-02T05:24:39.896Z","avatar_url":"https://github.com/Astrodynamic.png","language":"C++","readme":"# MultilayerPerceptron Project\n\nThis project is an implementation of a multilayer perceptron in C++ language using C++17 standard. The perceptron is able to classify images with handwritten letters of the Latin alphabet and has from 2 to 5 hidden layers. 
The program is built using the MVC pattern and the GUI implementation is based on the Qt library.\n\n## Table of Contents\n- [Dependencies](#dependencies)\n- [Build and Installation](#build-and-installation)\n- [Usage](#usage)\n- [License](#license)\n\n## Dependencies\n\nThe following dependencies are required to build and run this project:\n- C++17\n- CMake\n- Qt5 Widgets\n- Qt5 Charts\n- Google Test library\n\n## Build and Installation\n\nTo build and install this project, please follow the instructions below:\n\n1. Clone this repository to your local machine.\n2. Open a terminal and navigate to the project directory.\n3. Run `cmake -S . -B ./build` to generate the build files.\n4. Run `cmake --build ./build` to build the project.\n5. Run `./build/MLP` to launch the program.\n\nThere is no separate uninstall step; to remove all generated build artifacts, run `find ./ -name \"build\" -type d -exec rm -rf {} +`.\n\n## Usage\n\n### Running the Program\n\nTo run the program, please follow the instructions below:\n\n1. Launch the program by running `./build/MLP`.\n2. Click on the \"Load Dataset\" button to load the dataset.\n3. Click on the \"Train\" button to train the perceptron.\n4. Click on the \"Test\" button to test the perceptron.\n5. Use the other buttons and input fields to customize the settings of the perceptron.\n\n### Saving and Loading Weights\n\nTo save or load weights of the perceptron, please follow the instructions below:\n\n1. Click on the \"Save Weights\" button to save the current weights of the perceptron to a file.\n2. Click on the \"Load Weights\" button to load the weights of the perceptron from a file.\n\n### Drawing Images\n\nTo draw images, please follow the instructions below:\n\n1. Click on the \"Draw Image\" button to open the drawing window.\n2. Draw an image by clicking and dragging the mouse.\n3. 
Click on the \"Classify\" button to classify the drawn image.\n\n### Real-Time Training\n\nTo start the real-time training process, please follow the instructions below:\n\n1. Click on the \"Real-Time Training\" button to open the training window.\n2. Input the number of epochs to train for and click on the \"Start\" button.\n3. The error control values for each training epoch will be displayed in the graph.\n\n### Cross-Validation\n\nTo run the training process using cross-validation, please follow the instructions below:\n\n1. Click on the \"Cross-Validation\" button to open the cross-validation window.\n2. Input the number of groups k to use and click on the \"Start\" button.\n3. The average accuracy, precision, recall, f-measure, and total time spent on the experiment will be displayed on the screen.\n\n## License\n\nThis project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.\n\n\u003ch1 align=\"center\"\u003e MLP \u003c/h1\u003e\n\u003ch2 align=\"center\"\u003e Main window view \u003c/h2\u003e\n\u003cimg src=\"data/mainwindow.png\"\u003e\n\n1. Basic application settings\n2. Perceptron settings area\n3. Perceptron learning control area\n4. Manual image input area\n5. Image processing result\n6. Brush settings area for manual input zone\n7. Trained network statistics\n\n\u003cp\u003e All the above-mentioned entities can be manipulated in various ways (displayed over the display area, closed/opened, resized and relocated), for example, like this: \u003c/p\u003e\n\n\u003cimg src=\"data/randomplace.png\"\u003e\n\n\u003chr\u003e\n\n\u003ch2 align=\"center\"\u003e Basic settings \u003c/h2\u003e\n\u003ch3\u003e\u003cb\u003e File tab: \u003c/b\u003e\u003c/h3\u003e\n\n1. Load network edge weights from file\n2. Save network edge weights to file\n3. Load image to manual input area for recognition\n\n\u003chr\u003e\n\n\u003ch3\u003e\u003cb\u003e Window tab: \u003c/b\u003e\u003c/h3\u003e\n\n1. Display brush settings area\n2. 
Display image processing result\n3. Display Perceptron settings\n4. Display trained network statistics\n\n\u003chr\u003e\n\u003ch3\u003e\u003cb\u003e Test MLP tab: \u003c/b\u003e\u003c/h3\u003e\n\n1. Load data for network training\n2. Load data for network testing\n\n\u003chr\u003e\n\n\u003ch2 align=\"center\"\u003e Perceptron settings area \u003c/h2\u003e\n\n1. Ability to change the type of perceptron (matrix or graph)\n2. Ability to change the number of training epochs\n3. Ability to change the Learning Rate\n4. Ability to use cross-validation\n5. Ability to change the number of hidden layers of the Perceptron (from 2 to 5) and their depth\n\n\u003cp\u003e Note: the user cannot change the initial and final layers. \u003c/p\u003e\n\u003chr\u003e\n\n\u003ch2 align=\"center\"\u003e Perceptron learning control area \u003c/h2\u003e\n\u003cp\u003e In this program block, the user can observe the process of training and testing the network in real time. \u003c/p\u003e\n\u003chr\u003e\n\u003ch2 align=\"center\"\u003e Manual image input area \u003c/h2\u003e\n\n1. When the LMB is pressed, an image is created according to the mouse movements in the specified area.\n2. When the RMB is pressed, the image is completely erased (the area is filled with white).\n\u003chr\u003e\n\n\u003ch2 align=\"center\"\u003e Image processing result \u003c/h2\u003e\n\u003cp\u003e Displays a chart with the result of processing the image from the manual input area. The result may be ambiguous, i.e. the network may find several matches; the chart shows which option it leans towards more strongly. \u003c/p\u003e\n\u003chr\u003e\n\u003ch2 align=\"center\"\u003e Brush settings area \u003c/h2\u003e\n\n1. Ability to choose brush mode:\n   - Brush - paintbrush (draws)\n   - Erase - eraser (erases)\n2. 
Ability to choose brush width (from 1 to 100)\n\n\u003chr\u003e\n\u003ch2 align=\"center\"\u003e Statistics of the Trained Network \u003c/h2\u003e\n\n\u003ctable align=\"center\"\u003e\n\u003ctr\u003e \n    \u003ctd\u003e \u003cimg src=\"data/MLPSTAT.png\"\u003e \u003c/td\u003e\n\u003c/tr\u003e\n\u003ctr\u003e\n    \u003ctd\u003e\n    Shows: \u003cul\u003e\n    \u003cli\u003eAverage accuracy\n    \u003cli\u003ePrecision\n    \u003cli\u003eError rate\n    \u003cli\u003eRecall\n    \u003cli\u003eTraining time\n    \u003cli\u003eTesting time\n    \u003cli\u003eError plot\n    \u003c/ul\u003e\u003c/td\u003e\n\u003c/tr\u003e\n\u003c/table\u003e\n\n\u003chr\u003e\n\n\u003ch2 align=\"center\"\u003e Research \u003c/h2\u003e\n\n\u003ctable align=\"center\"\u003e\n    \u003ctr\u003e\n        \u003ctd\u003e\u003c/td\u003e\n        \u003ctd\u003e10 runs\u003c/td\u003e\n        \u003ctd\u003e100 runs\u003c/td\u003e\n        \u003ctd\u003e1000 runs\u003c/td\u003e\n        \u003ctd\u003eAverage runtime per run\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003eMatrix Perceptron\u003c/td\u003e\n        \u003ctd\u003e3510 sec\u003c/td\u003e\n        \u003ctd\u003e35100 sec\u003c/td\u003e\n        \u003ctd\u003e351000 sec\u003c/td\u003e\n        \u003ctd\u003e351 sec\u003c/td\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n        \u003ctd\u003eGraph Perceptron\u003c/td\u003e\n        \u003ctd\u003e5940 sec\u003c/td\u003e\n        \u003ctd\u003e59400 sec\u003c/td\u003e\n        \u003ctd\u003e594000 sec\u003c/td\u003e\n        \u003ctd\u003e594 sec\u003c/td\u003e\n    \u003c/tr\u003e\n\u003c/table\u003e\n\n\u003chr\u003e\n\n\u003ch2 align=\"center\"\u003e Program Output Examples \u003c/h2\u003e\n\n\u003cimg align=\"center\" src=\"data/result1.png\"\u003e\n\n\u003chr\u003e\n\n\u003cimg align=\"center\" src=\"data/result2.png\"\u003e\n\n\u003chr\u003e\n\n\u003cimg align=\"center\" src=\"data/result3.png\"\u003e\n\n\u003chr\u003e\n\n\u003cimg align=\"center\" 
src=\"data/result4.png\"\u003e\n\n\u003chr\u003e\n\n\u003cimg align=\"center\" src=\"data/result5.png\"\u003e\n\n\u003chr\u003e","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fastrodynamic%2Fmultilayerperceptron-in-qt-cpp","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fastrodynamic%2Fmultilayerperceptron-in-qt-cpp","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fastrodynamic%2Fmultilayerperceptron-in-qt-cpp/lists"}