{"id":25031780,"url":"https://github.com/centuriontheman/parallelsortingalgorithms","last_synced_at":"2026-02-18T15:31:09.136Z","repository":{"id":258635073,"uuid":"867719479","full_name":"CenturionTheMan/ParallelSortingAlgorithms","owner":"CenturionTheMan","description":"Cpp/CUDA application for benchmarking sorting algorithms","archived":false,"fork":false,"pushed_at":"2024-12-10T02:36:08.000Z","size":24967,"stargazers_count":0,"open_issues_count":1,"forks_count":0,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-10-19T23:59:18.787Z","etag":null,"topics":["benchamark","cpp","cuda","multithreading","sorting-algorithms"],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/CenturionTheMan.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-10-04T15:31:29.000Z","updated_at":"2025-01-21T12:50:43.000Z","dependencies_parsed_at":"2024-10-27T23:35:32.544Z","dependency_job_id":"703483ad-19f3-4ef9-8d39-be961bbed3c8","html_url":"https://github.com/CenturionTheMan/ParallelSortingAlgorithms","commit_stats":null,"previous_names":["centuriontheman/parallelsortingalgorithms"],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github/CenturionTheMan/ParallelSortingAlgorithms","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CenturionTheMan%2FParallelSortingAlgorithms","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CenturionTheMan%2FParallelSortingAlgorithms/tags","releases_url":"https://repos.ecosyste.ms/api/v1/
hosts/GitHub/repositories/CenturionTheMan%2FParallelSortingAlgorithms/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CenturionTheMan%2FParallelSortingAlgorithms/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/CenturionTheMan","download_url":"https://codeload.github.com/CenturionTheMan/ParallelSortingAlgorithms/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CenturionTheMan%2FParallelSortingAlgorithms/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29583916,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-18T13:56:48.962Z","status":"ssl_error","status_checked_at":"2026-02-18T13:54:34.145Z","response_time":162,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["benchamark","cpp","cuda","multithreading","sorting-algorithms"],"created_at":"2025-02-05T22:44:49.472Z","updated_at":"2026-02-18T15:31:09.121Z","avatar_url":"https://github.com/CenturionTheMan.png","language":"C++","funding_links":[],"categories":[],"sub_categories":[],"readme":"# ParallelSortingAlgorithms\n\n- [ParallelSortingAlgorithms](#parallelsortingalgorithms)\n  - [Benchmarking tool](#benchmarking-tool)\n    - [Functionalities](#functionalities)\n      - [Time measurement](#time-measurement)\n      - 
[Saving results](#saving-results)\n      - [Customizable configuration](#customizable-configuration)\n      - [Verification of solutions](#verification-of-solutions)\n    - [How to run benchmarking tool (Windows)?](#how-to-run-benchmarking-tool-windows)\n  - [Developer's manual](#developers-manual)\n    - [Sorting function convention](#sorting-function-convention)\n    - [Benchmarking tool flowchart](#benchmarking-tool-flowchart)\n    - [Directory structure](#directory-structure)\n    - [Testing](#testing)\n      - [Windows](#windows)\n      - [VS Code](#vs-code)\n    - [Debugging (in VS Code)](#debugging-in-vs-code)\n  - [Graph-generating scripts](#graph-generating-scripts)\n    - [Prerequisites](#prerequisites)\n    - [Usage](#usage)\n  - [Resources](#resources)\n\n## Benchmarking tool\n\nThis project includes the source code for a CLI benchmarking tool that lets you measure the execution times of the *Bitonic* and *Odd-Even* array sorting algorithms, implemented for both the CPU (plain *C++*) and the GPU (*C++* \u0026 *CUDA*).\n\n### Functionalities\n\n#### Time measurement\n\nThe tool measures execution times for the CPU and GPU implementations of the *Bitonic* and *Odd-Even* array sorting algorithms. Measurements are performed for different sizes of randomly generated or predefined sorting problem instances. 
Each instance size is measured multiple times in order to calculate an average result.\n\nWhen the measurement is done, the average result and standard deviation for each instance size are printed in the following tabular format:\n\n```\n\u003e\u003e\u003e STARTING BENCHMARK...\n\n#=========================================#=========================#==========================#==========================#\n| Instance size |       CPU Bitonic       |       GPU Bitonic       |       CPU Odd-Even       |       GPU Odd-Even       |\n#=========================================#=========================#==========================#==========================#\n|       1000000 | 1.23e+456 (1.23e+456) s | 1.23e+456 (1.23e+456) s |  1.23e+456 (1.23e+456) s |  1.23e+456 (1.23e+456) s |\n|      10000000 | 1.23e+456 (1.23e+456) s | 1.23e+456 (1.23e+456) s |  1.23e+456 (1.23e+456) s |  1.23e+456 (1.23e+456) s |\n|     100000000 | 1.23e+456 (1.23e+456) s | 1.23e+456 (1.23e+456) s |  1.23e+456 (1.23e+456) s |  1.23e+456 (1.23e+456) s |\n|    1000000000 | 1.23e+456 (1.23e+456) s | 1.23e+456 (1.23e+456) s |  1.23e+456 (1.23e+456) s |  1.23e+456 (1.23e+456) s |\n#=========================================#=========================#==========================#==========================#\n\n\u003e\u003e\u003e BENCHMARK COMPLETE!\n```\n\n#### Saving results\n\nThe tool saves the measurement results for each repetition in an output file named `result.csv` located in the tool's parent directory. Each line is associated with one repetition and contains the following semicolon-separated (`;`) fields:\n\n1. Instance size (an integer)\n2. Execution time for *Bitonic Sort* implemented on the CPU (a real number with a `.` decimal point, or empty when the measurement was disabled)\n3. Execution time for *Bitonic Sort* implemented on the GPU (a real number with a `.` decimal point, or empty when the measurement was disabled)\n4. 
Execution time for *Odd-Even Sort* implemented on the CPU (a real number with a `.` decimal point, or empty when the measurement was disabled)\n5. Execution time for *Odd-Even Sort* implemented on the GPU (a real number with a `.` decimal point, or empty when the measurement was disabled)\n\nAdditionally, a header is saved to the first line of `result.csv`.\n\n#### Customizable configuration\n\nThe tool allows the user to specify a custom configuration in the `configuration.ini` file. This file must be located in the tool's parent directory. Each line of the configuration file contains a key-value pair in the format `key=value`. All of the required keys are listed below:\n\n- `measure_cpu`: Boolean value that defines whether measurements for the CPU implementations are performed.\n- `measure_gpu`: Boolean value that defines whether measurements for the GPU implementations are performed.\n- `measure_bitonic`: Boolean value that defines whether measurements for *Bitonic Sort* are performed.\n- `measure_odd_even`: Boolean value that defines whether measurements for *Odd-Even Sort* are performed.\n- `verify_result`: Boolean value that defines whether sorting validity is checked.\n\nThe other lines may contain data for the instances that should be measured. Each such line contains a key-value pair whose key is either the `random_instance` or the `predefined_instance` keyword. For the `random_instance` key, the value is a space-separated pair: the instance size and the number of measurement repetitions for it. 
For the `predefined_instance` key, the value is the number of repetitions for the instance followed by the integers that make up the instance itself (all space-separated).\n\nThe configuration file may also contain empty lines.\n\nAn example of a valid configuration file is shown below:\n\n```ini\nmeasure_cpu=0\nmeasure_gpu=1\nmeasure_bitonic=1\nmeasure_odd_even=1\nverify_result=1\n\nrandom_instance=50000 10\nrandom_instance=10000000 56\nrandom_instance=199 56\n\npredefined_instance=4 -1 88 2 9 4 105 1 34\n```\n\nThe tool prints the current configuration during startup. Example output for the configuration file presented above is shown below:\n\n```\n\u003e\u003e\u003e CONFIGURATION LOADED\n\nCPU measurement                 OFF\nGPU measurement                 ON\nBitonic Sort measurement        ON\nOdd-Even Sort measurement       ON\nResults verification            ON\n\nDefined instances               4\n\n```\n\n#### Verification of solutions\n\nFor each repetition, the solutions from all of the implementations are verified. If an implementation sorted the instance properly, then nothing special happens. 
Otherwise the tool terminates and information about the error is printed to the console (example below).\n\n```\n\u003e\u003e\u003e STARTING BENCHMARK...\n\n#=========================================#=========================#==========================#==========================#\n| Instance size |       CPU Bitonic       |       GPU Bitonic       |       CPU Odd-Even       |       GPU Odd-Even       |\n#=========================================#=========================#==========================#==========================#\n|       1000000 | 1.23e+456 (1.23e+456) s | 1.23e+456 (1.23e+456) s |  1.23e+456 (1.23e+456) s |  1.23e+456 (1.23e+456) s |\n\n\u003e\u003e\u003e BENCHMARK TERMINATED!\n\u003e\u003e\u003e The GPU Odd-Even has given an invalid solution for instance size 100000 in repetition 4.\n\u003e\u003e\u003e Please check the \"error.log\" file.\n```\n\nDetails about the error are saved to the `error.log` file located in the tool's parent directory. These details are:\n\n- The error message from the console.\n- The problematic instance as a list of space-separated integers.\n- The invalidly sorted output as a list of space-separated integers.\n\nAn example of `error.log` content is shown below:\n\n```\n\u003e\u003e\u003e The GPU Odd-Even has given an invalid solution for instance size 10  in repetition 4.\n[Instance]: 0 5 8 1 3 5 8\n[Solution]: 0 1 3 8 5 8 5\n```\n\n### How to run benchmarking tool (Windows)?\n\nTo run the benchmarking tool you first need to build it from the source files. Using *CMake* is the recommended and easiest way to do this, since you only have to run the `build.bat` script (see command below).\n\n```cmd\n.\\scripts\\build.bat\n```\n\nWhen the build is finished, both the `benchmarking_tool.exe` and `debug_benchmarking_tool.exe` files appear in the project's main directory. 
`benchmarking_tool.exe` is the *release* build and `debug_benchmarking_tool.exe` is the *debug* build. For measurements you should use only the *release* build, since it is optimized. You can run the benchmarking tool with the following command:\n\n```cmd\n.\\benchmarking_tool.exe\n```\n\n## Developer's manual\n\n### Sorting function convention\n\nEach sorting algorithm implementation should be put inside a `\u003carchitecture\u003e\u003calgorithm\u003eSort()` function, where `\u003carchitecture\u003e` is a placeholder for the architecture (CPU/GPU) name and `\u003calgorithm\u003e` is a placeholder for the algorithm (*Bitonic*/*Odd-Even*) name. The function's name should be written in upper camel case. An example of a valid function header is shown below.\n\n```cpp\nvoid GpuBitonicSort(std::vector\u003cint\u003e\u0026 array);\n```\n\nThe function takes the array to be sorted and sorts it in place, i.e. the sorted array is placed in the same memory location as the original array.\n\n### Benchmarking tool flowchart\n\n![benchmarking tool flowchart](./gfx/flowchart.svg)\n\n### Directory structure\n\n- `gfx`: all graphics used in the repo.\n- `benchmarking_tool`: source code and headers of the benchmarking tool\n    - `headers`: headers for the benchmarking tool's source code\n    - `src`: source files of the benchmarking tool\n- `tests`: folder for unit and integration tests\n- `scripts`: utility scripts for building and testing the benchmarking tool, as well as scripts that help prepare graphs\n- `results`: saved `*.csv` files with benchmarking results\n\n### Testing\n\nAll tests should be placed in the `tests` directory. The testing tool for the project is the *Google Test* library.\n\n#### Windows\n\nTo run the test regression, use the `test.bat` script. Please note that it requires *CMake* to be installed on your system. 
This script builds the tool and then runs the *Google Test* regression.\n\n```cmd\n.\\scripts\\test.bat\n```\n\n#### VS Code\n\nTo run tests in *VS Code* you need to install *CMake* together with extensions such as the *CMake Extension*, the *CMake Tools Extension* and the *CMake Language Support Extension*. This adds the *Testing* tab to your IDE, where you can run the tests.\n\n\u003e ⚠️ Remember that you need to refresh the tab first in order to build the tests' code! Only after that can you actually run the regression.\n\n![testing in vscode](./gfx/testing.png)\n\n### Debugging (in VS Code)\n\nTo debug the tool with *VS Code* you need to install *CMake* together with extensions such as the *CMake Extension*, the *CMake Tools Extension* and the *CMake Language Support Extension*. This adds another tab to your IDE called *CMake*, where you can run the debugger from the appropriate context menu (see figure below).\n\n![how to run debugger](./gfx/cmake-debug.png)\n\n## Graph-generating scripts\n\n### Prerequisites\n\nTo use the graph-generating scripts you need to create a *Python 3.12* virtual environment.\n\n```bash\npython3 -m venv .venv\n```\n\nThen activate this environment and install all required dependencies from `requirements.txt`.\n\nFor *Linux*:\n\n```bash\nsource .venv/bin/activate\n```\n\n```bash\npip install -r requirements.txt\n```\n\nFor *Windows*:\n\n```cmd\n.\\.venv\\Scripts\\activate.bat\n```\n\n```bash\npip install -r requirements.txt\n```\n\n### Usage\n\nThe tool comes with a handful of *Python 3.12* scripts that generate various graphs:\n\n- Comparison of all implementations (based on one `result.csv` file).\n\n  ```bash\n  python3 scripts/graph_generators/compare_all_implementations.py PATH_TO_RESULTS_FILE\n  ```\n\n- Comparison of two benchmarks for one of the implementations (based on two `result.csv` files).\n\n  ```bash\n  python3 scripts/graph_generators/compare_one_implementation.py PATH_TO_RESULTS_FILE\n  ```\n\n- 
Comparison of the CPU and GPU implementations of one algorithm (based on one `result.csv` file).\n\n  ```bash\n  python3 scripts/graph_generators/compare_cpu_gpu_for_algorithm.py PATH_TO_RESULTS_FILE\n  ```\n\n- Presentation of the time complexity of one of the implementations (based on one `result.csv` file).\n\n  ```bash\n  python3 scripts/graph_generators/implementation_time_complexity.py PATH_TO_RESULTS_FILE\n  ```\n\nAll of those scripts display the appropriate graph and save it as a PDF file in the `gfx/plots/` directory.\n\n## Resources\n\n- [Project's board on Trello](https://trello.com/b/PZKg8jf4/gpuwt0910zrownoleglonealgorytmysortowania)\n- [Documentation on _Overleaf_](https://www.overleaf.com/project/67269f0e5f7564bafb402362)\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcenturiontheman%2Fparallelsortingalgorithms","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcenturiontheman%2Fparallelsortingalgorithms","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcenturiontheman%2Fparallelsortingalgorithms/lists"}