{"id":20433765,"url":"https://github.com/intelpython/dpbench","last_synced_at":"2025-04-12T21:06:40.823Z","repository":{"id":37049408,"uuid":"333135583","full_name":"IntelPython/dpbench","owner":"IntelPython","description":"Benchmark suite to evaluate Data Parallel Extensions for Python","archived":false,"fork":false,"pushed_at":"2024-09-05T20:31:23.000Z","size":2062,"stargazers_count":17,"open_issues_count":13,"forks_count":19,"subscribers_count":7,"default_branch":"main","last_synced_at":"2025-04-12T21:06:34.677Z","etag":null,"topics":["benchmark","dpctl","dpnp","numba","numba-dpex","numpy","performance"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/IntelPython.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-01-26T15:54:26.000Z","updated_at":"2024-09-05T20:31:26.000Z","dependencies_parsed_at":"2023-12-09T08:21:56.298Z","dependency_job_id":"94a92c64-db7a-4545-9973-4b95c517d03e","html_url":"https://github.com/IntelPython/dpbench","commit_stats":null,"previous_names":[],"tags_count":2,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/IntelPython%2Fdpbench","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/IntelPython%2Fdpbench/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/IntelPython%2Fdpbench/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/IntelPython%2Fdpbench/manifests","owner_url":"http
s://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/IntelPython","download_url":"https://codeload.github.com/IntelPython/dpbench/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248631684,"owners_count":21136562,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["benchmark","dpctl","dpnp","numba","numba-dpex","numpy","performance"],"created_at":"2024-11-15T08:20:59.187Z","updated_at":"2025-04-12T21:06:40.799Z","avatar_url":"https://github.com/IntelPython.png","language":"Python","readme":"\u003c!--\nSPDX-FileCopyrightText: 2022 - 2023 Intel Corporation\n\nSPDX-License-Identifier: Apache-2.0\n--\u003e\n\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![pre-commit](https://github.com/IntelPython/dpbench/actions/workflows/pre-commit.yml/badge.svg)](https://github.com/IntelPython/dpbench/actions/workflows/pre-commit.yml)\n\n# DPBench - Benchmarks to evaluate Data-Parallel Extensions for Python\n\n* **\\\u003cbenchmark\\\u003e\\_numba\\_\\\u003cmode\\\u003e.py** : This file contains Numba implementations of the benchmarks. There are three modes: nopython-mode, nopython-mode-parallel and nopython-mode-parallel-range.\n* **\\\u003cbenchmark\\\u003e\\_numba_dpex\\_\\\u003cmode\\\u003e.py** : This file contains Numba-Dpex implementations of the benchmarks. 
There are three modes: kernel-mode, numpy-mode and prange-mode.\n* **\\\u003cbenchmark\\\u003e\\_dpnp\\_\\\u003cmode\\\u003e.py** : This file contains dpnp implementations of the benchmarks.\n* **\\\u003cbenchmark\\\u003e\\_native_ext/\\\u003cbenchmark\\\u003e\\_sycl/_\\\u003cbenchmark\\\u003e_kernel.hpp** : This file contains native dpcpp implementations of the benchmarks.\n* **\\\u003cbenchmark\\\u003e\\_numpy.py** : This file contains numpy implementations of the benchmarks. It should take advantage of numpy arrays and avoid loops over arrays.\n* **\\\u003cbenchmark\\\u003e\\_python.py** : This file contains naive python implementations of the benchmarks. It should be run only with small presets; otherwise execution will take a long time.\n* **\\\u003cbenchmark\\\u003e\\_numba_mlir\\_\\\u003cmode\\\u003e.py** : This file contains Numba-MLIR implementations of the benchmarks. There are three modes: kernel-mode, numpy-mode and prange-mode. Experimental.\n\n## Examples of setting up and running the benchmarks\n\n### Using prebuilt version\n\n1. Create conda environment\n\n    ```bash\n    conda create -n dpbench dpbench -c dppy/label/dev -c conda-forge -c https://software.repos.intel.com/python/conda -c nodefaults --override-channels\n    conda activate dpbench\n    ```\n\n2. Run specific benchmark, e.g. black_scholes\n\n    ```bash\n    dpbench -b black_scholes run\n    ```\n\n### Build from source (for development)\n\n1. Clone the repository\n\n    ```bash\n    git clone https://github.com/IntelPython/dpbench\n    cd dpbench\n    ```\n\n2. Set up the conda environment and install dependencies:\n\n    ```bash\n    conda env create -n dpbench -f ./environments/conda.yml\n    ```\n\n    If you want to build SYCL benchmarks as well:\n    ```bash\n    conda env create -n dpbench -f ./environments/conda-linux-sycl.yml\n    ```\n\n3. Build DPBench\n\n    ```bash\n    pip install --no-index --no-deps --no-build-isolation -e . 
-v\n    ```\n\n    Alternatively, you can build it with `setup.py`, but the pip version is preferable:\n\n    ```bash\n    python setup.py develop\n    ```\n\n    For a SYCL build, use:\n    ```bash\n    CC=icx CXX=icpx DPBENCH_SYCL=1 pip install --no-index --no-deps --no-build-isolation -e . -v\n    ```\n\n    or\n\n    ```bash\n    CC=icx CXX=icpx DPBENCH_SYCL=1 python setup.py develop\n    ```\n\n4. Run specific benchmark, e.g. black_scholes\n\n    ```bash\n    dpbench -b black_scholes run\n    ```\n\n### Usage\n\n1. Run all benchmarks\n\n    ```bash\n    dpbench -a run\n    ```\n\n2. Generate report\n\n    ```bash\n    dpbench report\n    ```\n\n3. Device Customization\n\n   If a framework is SYCL-based, an extra configuration option\n   `sycl_device` may be set in the framework config file or by passing the\n   `--sycl-device` argument to `dpbench run` to control what device\n   the framework uses for execution. The `sycl_device` value should be\n   a legal [SYCL device filter\n   ](https://intel.github.io/llvm-docs/EnvironmentVariables.html#sycl_device_filter)\n   string. The dpcpp, dpnp, and numba_dpex frameworks support the\n   sycl_device option.\n\n   Here is an example:\n\n    ```shell\n    dpbench -b black_scholes -i dpnp run --sycl-device=level_zero:gpu:0\n    ```\n\n4. All available options can be listed using `dpbench --help` and `dpbench \u003ccommand\u003e --help`:\n\n    ```\n    usage: dpbench [-h] [-b [BENCHMARKS]] [-i [IMPLEMENTATIONS]] [-a | --all-implementations | --no-all-implementations] [--version] [-r [RUN_ID]] [--last-run | --no-last-run] [-d [RESULTS_DB]]\n               [--log-level [{critical,fatal,error,warning,info,debug}]]\n               {run,report,config} ...\n\n    positional arguments:\n    {run,report,config}\n\n    options:\n    -h, --help            show this help message and exit\n    -b [BENCHMARKS], --benchmarks [BENCHMARKS]\n                            Comma separated list of benchmarks. 
Leave empty to load all benchmarks.\n    -i [IMPLEMENTATIONS], --implementations [IMPLEMENTATIONS]\n                            Comma separated list of implementations. Use --all-implementations to load all available implementations.\n    -a, --all-implementations, --no-all-implementations\n                            If set, all available implementations will be loaded.\n    --version             show program's version number and exit\n    -r [RUN_ID], --run-id [RUN_ID]\n                            run_id to perform actions on. Use --last-run to use latest available run, or leave empty to create new one.\n    --last-run, --no-last-run\n                            Sets run_id to the latest run_id from the database.\n    -d [RESULTS_DB], --results-db [RESULTS_DB]\n                            Path to a database to store results.\n    --log-level [{critical,fatal,error,warning,info,debug}]\n                            Log level.\n    ```\n\n    ```\n    usage: dpbench run [-h] [-p [{S,M16Gb,M,L}]] [-s | --validate | --no-validate] [--dpbench | --no-dpbench] [--experimental-npbench | --no-experimental-npbench] [--experimental-polybench | --no-experimental-polybench]\n                   [--experimental-rodinia | --no-experimental-rodinia] [-r [REPEAT]] [-t [TIMEOUT]] [--precision [{single,double}]] [--print-results | --no-print-results] [--save | --no-save] [--sycl-device [SYCL_DEVICE]]\n                   [--skip-expected-failures | --no-skip-expected-failures]\n\n    Subcommand to run benchmark executions.\n\n    options:\n    -h, --help            show this help message and exit\n    -p [{S,M16Gb,M,L}], --preset [{S,M16Gb,M,L}]\n                            Preset to use for benchmark execution.\n    -s, --validate, --no-validate\n                            Set if the validation will be run for each benchmark.\n    --dpbench, --no-dpbench\n                            Set if run dpbench benchmarks.\n    --experimental-npbench, --no-experimental-npbench\n               
             Set if run npbench benchmarks.\n    --experimental-polybench, --no-experimental-polybench\n                            Set if run polybench benchmarks.\n    --experimental-rodinia, --no-experimental-rodinia\n                            Set if run rodinia benchmarks.\n    -r [REPEAT], --repeat [REPEAT]\n                            Number of repeats for each benchmark.\n    -t [TIMEOUT], --timeout [TIMEOUT]\n                            Timeout in seconds for each benchmark execution.\n    --precision [{single,double}]\n                            Data precision to use for array initialization.\n    --print-results, --no-print-results\n                            Whether to show the result summary.\n    --save, --no-save     Whether to save execution results into the database.\n    --sycl-device [SYCL_DEVICE]\n                            Sycl device to overwrite for framework configurations.\n    --skip-expected-failures, --no-skip-expected-failures\n                            Whether to skip benchmarks that are expected to fail.\n    ```\n\n    ```\n    usage: dpbench report [--comparisons [COMPARISON_PAIRS]] [--csv]\n\n    Subcommand to generate a summary report from the local DB\n\n    options:\n    -c, --comparisons [COMPARISON_PAIRS]\n                            Comma separated list of implementation pairs to be compared\n    --csv\n                            Sets the general summary report to output in CSV format (default: False)\n    ```\n\n### Performance Measurement\n\nFor each benchmark, we measure the execution time of the\ncomputationally intensive part, but not the initialization or\nshutdown. 
We provide three inputs (a.k.a. presets) for each benchmark.\n\n* **S** - Minimal input to verify that programs are executable\n* **M** - Medium-sized input for performance measurements on client devices\n* **L** - Large-sized input for performance measurements on servers\n\nAs a rough guideline for selecting input sizes, **S** inputs need to\nbe small enough for python and numpy implementations to execute in\n\u003c100ms. **M** and **L** inputs need to be large enough to obtain\nuseful performance insights on client and server devices,\nrespectively. Also, note that the python and numpy implementations are\nnot expected to work with **M** and **L** inputs.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fintelpython%2Fdpbench","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fintelpython%2Fdpbench","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fintelpython%2Fdpbench/lists"}