{"id":20433749,"url":"https://github.com/intelpython/scikit-learn_bench","last_synced_at":"2025-04-04T08:04:03.890Z","repository":{"id":41080883,"uuid":"151195504","full_name":"IntelPython/scikit-learn_bench","owner":"IntelPython","description":"scikit-learn_bench benchmarks various implementations of machine learning algorithms across data analytics frameworks. It currently supports the scikit-learn, DAAL4PY, cuML, and XGBoost frameworks for commonly used machine learning algorithms.","archived":false,"fork":false,"pushed_at":"2025-03-25T17:01:17.000Z","size":893,"stargazers_count":115,"open_issues_count":15,"forks_count":72,"subscribers_count":11,"default_branch":"main","last_synced_at":"2025-03-25T18:21:55.218Z","etag":null,"topics":["benchmarks","daal4py","hacktoberfest","machine-learning","machine-learning-benchmarks","scikit-learn-benchmarks"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/IntelPython.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":".github/CODEOWNERS","security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2018-10-02T03:30:53.000Z","updated_at":"2025-03-20T13:00:42.000Z","dependencies_parsed_at":"2023-02-17T07:01:04.712Z","dependency_job_id":"900f6167-0b54-4dd6-bbdd-9df3bfd6d12f","html_url":"https://github.com/IntelPython/scikit-learn_bench","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/IntelPython%2Fscikit-learn_bench","tags_url":"https://repos.ecosyste.ms/api/v1/hos
ts/GitHub/repositories/IntelPython%2Fscikit-learn_bench/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/IntelPython%2Fscikit-learn_bench/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/IntelPython%2Fscikit-learn_bench/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/IntelPython","download_url":"https://codeload.github.com/IntelPython/scikit-learn_bench/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247137058,"owners_count":20889798,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["benchmarks","daal4py","hacktoberfest","machine-learning","machine-learning-benchmarks","scikit-learn-benchmarks"],"created_at":"2024-11-15T08:20:51.735Z","updated_at":"2025-04-04T08:04:03.867Z","avatar_url":"https://github.com/IntelPython.png","language":"Python","readme":"# Machine Learning Benchmarks\n\n[![Build Status](https://dev.azure.com/daal/scikit-learn_bench/_apis/build/status/IntelPython.scikit-learn_bench?branchName=main)](https://dev.azure.com/daal/scikit-learn_bench/_build/latest?definitionId=8\u0026branchName=main)\n\n**Scikit-learn_bench** is a benchmark tool for libraries and frameworks implementing Scikit-learn-like APIs and other workloads.\n\nBenefits:\n- Full control of benchmarks suite through CLI\n- Flexible and powerful benchmark config structure\n- Available with advanced profiling tools, such as Intel(R) VTune* Profiler\n- Automated benchmarks report generation\n\n### 📜 Table of Contents\n\n- [Machine Learning 
Benchmarks](#machine-learning-benchmarks)\n  - [🔧 Create a Python Environment](#-create-a-python-environment)\n  - [🚀 How To Use Scikit-learn\\_bench](#-how-to-use-scikit-learn_bench)\n    - [Benchmarks Runner](#benchmarks-runner)\n    - [Report Generator](#report-generator)\n    - [Scikit-learn\\_bench High-Level Workflow](#scikit-learn_bench-high-level-workflow)\n  - [📚 Benchmark Types](#-benchmark-types)\n  - [📑 Documentation](#-documentation)\n\n## 🔧 Create a Python Environment\n\nHow to create a usable Python environment with the following required frameworks:\n\n- **sklearn, sklearnex, and gradient boosting frameworks**:\n\n```bash\n# with pip\npip install -r envs/requirements-sklearn.txt\n# or with conda\nconda env create -n sklearn -f envs/conda-env-sklearn.yml\n```\n\n- **RAPIDS**:\n\n```bash\nconda env create -n rapids --solver=libmamba -f envs/conda-env-rapids.yml\n```\n\n## 🚀 How To Use Scikit-learn_bench\n\n### Benchmarks Runner\n\nHow to run benchmarks using the `sklbench` module and a specific configuration:\n\n```bash\npython -m sklbench --config configs/sklearn_example.json\n```\n\nThe default output is a file with JSON-formatted results of benchmarking cases. To generate a better human-readable report, use the following command:\n\n```bash\npython -m sklbench --config configs/sklearn_example.json --report\n```\n\nBy default, output and report file paths are `result.json` and `report.xlsx`. 
To specify custom file paths, run:\n\n```bash\npython -m sklbench --config configs/sklearn_example.json --report --result-file result_example.json --report-file report_example.xlsx\n```\n\nFor a description of all benchmarks runner arguments, refer to [documentation](sklbench/runner/README.md#arguments).\n\n### Report Generator\n\nTo combine raw result files gathered from different environments, call the report generator:\n\n```bash\npython -m sklbench.report --result-files result_1.json result_2.json --report-file report_example.xlsx\n```\n\nFor a description of all report generator arguments, refer to [documentation](sklbench/report/README.md#arguments).\n\n### Scikit-learn_bench High-Level Workflow\n\n```mermaid\nflowchart TB\n    A[User] -- High-level arguments --\u003e B[Benchmarks runner]\n    B -- Generated benchmarking cases --\u003e C[\"Benchmarks collection\"]\n    C -- Raw JSON-formatted results --\u003e D[Report generator]\n    D -- Human-readable report --\u003e A\n\n    classDef userStyle fill:#44b,color:white,stroke-width:2px,stroke:white;\n    class A userStyle\n```\n\n## 📚 Benchmark Types\n\n**Scikit-learn_bench** supports the following types of benchmarks:\n\n - **Scikit-learn estimator** - Measures performance and quality metrics of the [sklearn-like estimator](https://scikit-learn.org/stable/glossary.html#term-estimator).\n - **Function** - Measures performance metrics of a specified function.\n\n## 📑 Documentation\n[Scikit-learn_bench](README.md):\n- [Configs](configs/README.md)\n- [Benchmarks Runner](sklbench/runner/README.md)\n- [Report Generator](sklbench/report/README.md)\n- [Benchmarks](sklbench/benchmarks/README.md)\n- [Data Processing and Storage](sklbench/datasets/README.md)\n- [Emulators](sklbench/emulators/README.md)
\n- [Developer Guide](docs/README.md)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fintelpython%2Fscikit-learn_bench","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fintelpython%2Fscikit-learn_bench","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fintelpython%2Fscikit-learn_bench/lists"}