{"id":18400678,"url":"https://github.com/databricks/spark-perf","last_synced_at":"2025-09-09T21:58:15.324Z","repository":{"id":66099411,"uuid":"20941526","full_name":"databricks/spark-perf","owner":"databricks","description":"Performance tests for Apache Spark","archived":false,"fork":false,"pushed_at":"2018-07-09T22:30:16.000Z","size":3599,"stargazers_count":380,"open_issues_count":39,"forks_count":201,"subscribers_count":47,"default_branch":"master","last_synced_at":"2025-08-29T01:26:26.657Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Scala","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/databricks.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2014-06-17T22:25:40.000Z","updated_at":"2025-05-16T15:42:54.000Z","dependencies_parsed_at":null,"dependency_job_id":"73802424-48da-4113-81f7-41f7e9426fa8","html_url":"https://github.com/databricks/spark-perf","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/databricks/spark-perf","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/databricks%2Fspark-perf","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/databricks%2Fspark-perf/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/databricks%2Fspark-perf/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/databricks%2Fspark-perf/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/databricks","download_url":"https://codeload.github.com/databricks/spark-perf/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/databricks%2Fspark-perf/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":274367808,"owners_count":25272302,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-09-09T02:00:10.223Z","response_time":80,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-06T02:35:59.163Z","updated_at":"2025-09-09T21:58:15.281Z","avatar_url":"https://github.com/databricks.png","language":"Scala","readme":"# Spark Performance Tests\n\n[![Build Status](https://travis-ci.org/databricks/spark-perf.svg?branch=master)](https://travis-ci.org/databricks/spark-perf)\n\nThis is a performance testing framework for [Apache Spark](http://spark.apache.org) 1.0+.\n\n## Features\n\n- Suites of performance tests for Spark, PySpark, Spark Streaming, and MLlib.\n- Parameterized test configurations:\n   - Sweeps sets of parameters to test against multiple Spark and test configurations.\n- Automatically downloads and builds Spark:\n   - Maintains a cache of successful builds to enable rapid testing against multiple Spark versions.\n- [...]\n\nFor questions, bug reports, or feature requests, please [open an issue on GitHub](https://github.com/databricks/spark-perf/issues).\n\n## Coverage\n\n- Spark Core RDD\n  - list coming soon\n- SQL and DataFrames\n  - coming soon\n- Machine Learning\n  - glm-regression: Generalized Linear Regression Model\n  - glm-classification: Generalized Linear Classification Model\n  - naive-bayes: Naive Bayes\n  - naive-bayes-bernoulli: Bernoulli Naive Bayes\n  - decision-tree: Decision Tree\n  - als: Alternating Least Squares\n  - kmeans: K-Means clustering\n  - gmm: Gaussian Mixture Model\n  - svd: Singular Value Decomposition\n  - pca: Principal Component Analysis\n  - summary-statistics: Summary Statistics (min, max, ...)\n  - block-matrix-mult: Matrix Multiplication\n  - pearson: Pearson's Correlation\n  - spearman: Spearman's Correlation\n  - chi-sq-feature/gof/mat: Chi-square Tests\n  - word2vec: Word2Vec distributed presentation of words\n  - fp-growth: FP-growth frequent item sets\n  - python-glm-classification: Generalized Linear Classification Model\n  - python-glm-regression: Generalized Linear Regression Model\n  - python-naive-bayes: Naive Bayes\n  - python-als: Alternating Least Squares\n  - python-kmeans: K-Means clustering\n  - python-pearson: Pearson's Correlation\n  - python-spearman: Spearman's Correlation\n\n\n## Dependencies\n\nThe `spark-perf` scripts require Python 2.7+.  If you're using an earlier version of Python, you may need to install the `argparse` library using `easy_install argparse`.\n\nSupport for automatically building Spark requires Maven.  On `spark-ec2` clusters, this can be installed using the `./bin/spark-ec2/install-maven` script from this project.\n\n\n## Configuration\n\nTo configure `spark-perf`, copy `config/config.py.template` to `config/config.py` and edit that file.  See `config.py.template` for detailed configuration instructions.  After editing `config.py`, execute `./bin/run` to run performance tests.  You can pass the `--config` option to use a custom configuration file.\n\nThe following sections describe some additional settings to change for certain test environments:\n\n### Running locally\n\n1. Set up local SSH server/keys such that `ssh localhost` works on your machine without a password.\n2. Set config.py options that are friendly for local execution:\n\n   ```\n   SPARK_HOME_DIR = /path/to/your/spark\n   SPARK_CLUSTER_URL = \"spark://%s:7077\" % socket.gethostname()\n   SCALE_FACTOR = .05\n   SPARK_DRIVER_MEMORY = 512m\n   spark.executor.memory = 2g\n   ```\n3. Uncomment at least one `SPARK_TESTS` entry.\n\n### Running on an existing Spark cluster\n1. SSH into the machine hosting the standalone master\n2. Set config.py options:\n\n   ```\n   SPARK_HOME_DIR = /path/to/your/spark/install\n   SPARK_CLUSTER_URL = \"spark://\u003cyour-master-hostname\u003e:7077\"\n   SCALE_FACTOR = \u003cdepends on your hardware\u003e\n   SPARK_DRIVER_MEMORY = \u003cdepends on your hardware\u003e\n   spark.executor.memory = \u003cdepends on your hardware\u003e\n   ```\n3. Uncomment at least one `SPARK_TESTS` entry.\n\n### Running on a spark-ec2 cluster with a custom Spark version\n1. Launch an EC2 cluster with [Spark's EC2 scripts](https://spark.apache.org/docs/latest/ec2-scripts.html).\n2. Set config.py options:\n\n   ```\n   USE_CLUSTER_SPARK = False\n   SPARK_COMMIT_ID = \u003cwhat you want test\u003e\n   SCALE_FACTOR = \u003cdepends on your hardware\u003e\n   SPARK_DRIVER_MEMORY = \u003cdepends on your hardware\u003e\n   spark.executor.memory = \u003cdepends on your hardware\u003e\n   ```\n3. Uncomment at least one `SPARK_TESTS` entry.\n\n\n## License\n\nThis project is licensed under the Apache 2.0 License. See LICENSE for full license text.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdatabricks%2Fspark-perf","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdatabricks%2Fspark-perf","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdatabricks%2Fspark-perf/lists"}