# Dynamometer [![Build Status](https://travis-ci.org/linkedin/dynamometer.svg?branch=master)](https://travis-ci.org/linkedin/dynamometer)

## Dynamometer in Hadoop

Please be aware that Dynamometer has now been committed into Hadoop itself via the JIRA ticket
[HDFS-12345](https://issues.apache.org/jira/browse/HDFS-12345). It is located under the
`hadoop-tools/hadoop-dynamometer` submodule.
This GitHub project will continue to be maintained for testing against the `2.x` release line of
Hadoop, but all versions of Dynamometer which work with Hadoop 3 will only appear in Hadoop, and
future development will primarily occur there.

## Overview

Dynamometer is a tool to performance test Hadoop's HDFS NameNode. The intent is to provide a
real-world environment by initializing the NameNode against a production file system image and
replaying a production workload collected via, e.g., the NameNode's audit logs. This allows for
replaying a workload which is not only similar in characteristics to that experienced in production,
but actually identical.

Dynamometer will launch a YARN application which starts a single NameNode and a configurable number
of DataNodes, simulating an entire HDFS cluster as a single application. There is an additional
`workload` job, run as a MapReduce job, which accepts audit logs as input and uses the information
contained within to submit matching requests to the NameNode, inducing load on the service.

Dynamometer can execute this same workload against different Hadoop versions or with different
configurations, allowing for the testing of configuration tweaks and code changes at scale without
the necessity of deploying to a real large-scale cluster.

Throughout this documentation, we will use "Dyno-HDFS", "Dyno-NN", and "Dyno-DN" to refer to the
HDFS cluster, NameNode, and DataNodes (respectively) which are started _inside of_ a Dynamometer
application. Terms like HDFS, YARN, and NameNode used without qualification refer to the existing
infrastructure on top of which Dynamometer is run.

## Requirements

Dynamometer is based around YARN applications, so an existing YARN cluster is required for
execution. It also requires an accompanying HDFS instance to store some temporary files for
communication.

Please be aware that Dynamometer makes certain assumptions about HDFS, and thus only works with
certain versions. As discussed at the start of this README, this project only works with Hadoop 2;
support for Hadoop 3 is introduced in the version of Dynamometer within the Hadoop repository.
Below is a list of known supported versions of Hadoop which are compatible with Dynamometer:
* Hadoop 2.7 starting at 2.7.4
* Hadoop 2.8 starting at 2.8.4

Hadoop 2.8.2 and 2.8.3 are compatible as a cluster version on which to run Dynamometer, but are not
supported as a version-under-test.

## Building

Dynamometer consists of three main components:
* Infrastructure: This is the YARN application which starts a Dyno-HDFS cluster.
* Workload: This is the MapReduce job which replays audit logs.
* Block Generator: This is a MapReduce job used to generate input files for each Dyno-DN; its
  execution is a prerequisite step to running the infrastructure application.

They are built through standard [Gradle](https://gradle.org/) means, i.e. `./gradlew build`; this
project uses the [Gradle wrapper](https://docs.gradle.org/current/userguide/gradle_wrapper.html).
In addition to compiling everything, this will generate a distribution tarball, containing all
necessary components for an end user, at `build/distributions/dynamometer-VERSION.tar` (a zip is
also generated; their contents are identical). This distribution does not contain any Hadoop
dependencies, which are necessary to launch the application, as it assumes Dynamometer will be run
from a machine which has a working installation of Hadoop. To include Dynamometer's Hadoop
dependencies, use `build/distributions/dynamometer-fat-VERSION.tar`.

## Usage

Scripts discussed below can be found in the `bin` directory of the distribution. The corresponding
Java JAR files can be found in the `lib` directory.

### Preparing Requisite Files

A number of steps are required in advance of starting your first Dyno-HDFS cluster:

* Collect an fsimage and related files from your NameNode.
  This will include the `fsimage_TXID` file
  which the NameNode creates as part of checkpointing, the `fsimage_TXID.md5` file containing the MD5 hash
  of the image, the `VERSION` file containing some metadata, and the `fsimage_TXID.xml` file which can
  be generated from the fsimage using the offline image viewer:
  ```
  hdfs oiv -i fsimage_TXID -o fsimage_TXID.xml -p XML
  ```
  It is recommended that you collect these files from your Secondary/Standby NameNode, if you have one,
  to avoid placing additional load on your Active NameNode.

  All of these files must be placed somewhere on HDFS where the various jobs will be able to access them.
  They should all be in the same folder, e.g. `hdfs:///dyno/fsimage`.

  All of these steps can be automated with the `upload-fsimage.sh` script, e.g.:
  ```
  ./bin/upload-fsimage.sh 0001 hdfs:///dyno/fsimage
  ```
  where `0001` is the transaction ID of the desired fsimage. See the usage info of the script for more detail.
* Collect the Hadoop distribution tarball to use to start the Dyno-NN and -DNs. For example, if
  testing against Hadoop 2.7.4, use
  [hadoop-2.7.4.tar.gz](http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.7.4/hadoop-2.7.4.tar.gz).
  This distribution contains several components unnecessary for Dynamometer (e.g. YARN), so to reduce
  its size, you can optionally use the `create-slim-hadoop-tar.sh` script:
  ```
  ./bin/create-slim-hadoop-tar.sh hadoop-VERSION.tar.gz
  ```
  The Hadoop tar can be present on HDFS or locally where the client will be run from. Its path will be
  supplied to the client via the `-hadoop_binary_path` argument.

  Alternatively, if you use the `-hadoop_version` argument, you can simply specify which version you would
  like to run against (e.g. '2.7.4') and the client will attempt to download it automatically from an
  Apache mirror. See the usage information of the client for more details.
* Prepare a configuration directory.
  You will need to specify a configuration directory with the standard
  Hadoop configuration layout, e.g. it should contain `etc/hadoop/*-site.xml`. This determines the
  configuration with which the Dyno-NN and -DNs will be launched. Configurations that must be modified for
  Dynamometer to work properly (e.g. `fs.defaultFS` or `dfs.namenode.name.dir`) will be overridden
  at execution time. This can be a directory if it is available locally, or else an archive file on
  local or remote (HDFS) storage.

### Execute the Block Generation Job

This will use the `fsimage_TXID.xml` file to generate the list of blocks that each Dyno-DN should
advertise to the Dyno-NN. It runs as a MapReduce job.
```
./bin/generate-block-lists.sh
    -fsimage_input_path hdfs:///dyno/fsimage/fsimage_TXID.xml
    -block_image_output_dir hdfs:///dyno/blocks
    -num_reducers R
    -num_datanodes D
```
In this example, the XML file uploaded above is used to generate block listings into `hdfs:///dyno/blocks`.
`R` reducers are used for the job, and `D` block listings are generated; this determines how many
Dyno-DNs are started in the Dyno-HDFS cluster.

### Prepare Audit Traces (Optional)

This step is only necessary if you intend to use the audit trace replay capabilities of Dynamometer;
if you just intend to start a Dyno-HDFS cluster, you can skip to the next section.

The audit trace replay accepts one input file per mapper, and currently supports two input formats,
configurable via the `auditreplay.command-parser.class` configuration.

The default is a direct format,
`com.linkedin.dynamometer.workloadgenerator.audit.AuditLogDirectParser`. This accepts files in the
format produced by a standard-configuration audit logger, e.g.
lines like:
```
1970-01-01 00:00:42,000 INFO FSNamesystem.audit: allowed=true	ugi=hdfs	ip=/127.0.0.1	cmd=open	src=/tmp/foo	dst=null	perm=null	proto=rpc
```
When using this format you must also specify `auditreplay.log-start-time.ms`, which should be the
start time of the audit traces, in milliseconds since the Unix epoch. This is needed for all mappers
to agree on a single start time. For example, if the above line was the first audit event, you would
specify `auditreplay.log-start-time.ms=42000`.

The other supported format is `com.linkedin.dynamometer.workloadgenerator.audit.AuditLogHiveTableParser`.
This accepts files in the format produced by a Hive query with output fields, in order:

* `relativeTimestamp`: event time offset, in milliseconds, from the start of the trace
* `ugi`: user information of the submitting user
* `command`: name of the command, e.g. 'open'
* `source`: source path
* `dest`: destination path
* `sourceIP`: source IP of the event

Assuming your audit logs are available in Hive, this can be produced via a Hive query looking like:
```sql
INSERT OVERWRITE DIRECTORY '${outputPath}'
SELECT (timestamp - ${startTimestamp}) AS relativeTimestamp, ugi, command, source, dest, sourceIP
FROM '${auditLogTableLocation}'
WHERE timestamp >= ${startTimestamp} AND timestamp < ${endTimestamp}
DISTRIBUTE BY src
SORT BY relativeTimestamp ASC;
```

### Start the Infrastructure Application & Workload Replay

At this point you're ready to start up a Dyno-HDFS cluster and replay some workload against it! Note
that the output from the previous two steps can be reused indefinitely.

The client which launches the Dyno-HDFS YARN application can optionally launch the workload replay
job once the Dyno-HDFS cluster has fully started. This makes each replay into a single execution of
the client, enabling easy testing of various configurations.
You can also launch the two separately to have more control.
Similarly, it is possible to launch Dyno-DNs for an external NameNode which is not controlled by
Dynamometer/YARN. This can be useful for testing NameNode configurations which are not yet supported
(e.g. HA NameNodes). You can do this by passing the `-namenode_servicerpc_addr` argument to the
infrastructure application with a value that points to an external NameNode's service RPC address.

#### Manual Workload Launch

First, launch the infrastructure application to begin the startup of the internal HDFS cluster, e.g.:
```
./bin/start-dynamometer-cluster.sh
    -hadoop_binary_path hadoop-2.7.4.tar.gz
    -conf_path my-hadoop-conf
    -fs_image_dir hdfs:///fsimage
    -block_list_path hdfs:///dyno/blocks
```
This demonstrates the required arguments. You can run this with the `-help` flag to see further
usage information.

The client will track the Dyno-NN's startup progress and how many Dyno-DNs it considers live. It
will notify via logging when the Dyno-NN has exited safemode and is ready for use.

At this point, a workload job (a map-only MapReduce job) can be launched, e.g.:
```
./bin/start-workload.sh
    -Dauditreplay.input-path=hdfs:///dyno/audit_logs/
    -Dauditreplay.output-path=hdfs:///dyno/results/
    -Dauditreplay.num-threads=50
    -nn_uri hdfs://namenode_address:port/
    -start_time_offset 5m
    -mapper_class_name AuditReplayMapper
```
The type of workload generation is configurable; `AuditReplayMapper` replays an audit log trace as
discussed previously. The `AuditReplayMapper` is configured via configuration properties:
`auditreplay.input-path`, `auditreplay.output-path` and `auditreplay.num-threads` are required to
specify the input path for audit log files, the output path for the results, and the number of
threads per map task.
A number of map tasks equal to the number of files in `input-path` will be launched; each task will
read in one of these input files and use `num-threads` threads to replay the events contained within
that file. A best effort is made to faithfully replay the audit log events at the same pace at which
they originally occurred (optionally, this can be adjusted by specifying `auditreplay.rate-factor`,
a multiplicative factor applied to the rate of replay; e.g. use 2.0 to replay the events at twice
the original speed).

The AuditReplayMapper will output the benchmark results to a file `part-r-00000` in the output
directory in CSV format. Each line is in the format `user,type,operation,numops,cumulativelatency`,
e.g. `hdfs,WRITE,MKDIRS,2,150`.

#### Integrated Workload Launch

To have the infrastructure application client launch the workload automatically, parameters for the
workload job are passed to the infrastructure script. Only the AuditReplayMapper is supported in
this fashion at this time. To launch an integrated application with the same parameters as were used
above, the following can be used:
```
./bin/start-dynamometer-cluster.sh
    -hadoop_binary_path hadoop-2.7.4.tar.gz
    -conf_path my-hadoop-conf
    -fs_image_dir hdfs:///fsimage
    -block_list_path hdfs:///dyno/blocks
    -workload_replay_enable
    -workload_input_path hdfs:///dyno/audit_logs/
    -workload_output_path hdfs:///dyno/results/
    -workload_threads_per_mapper 50
    -workload_start_delay 5m
```
When run in this way, the client will automatically handle tearing down the Dyno-HDFS cluster once
the workload has completed.
To see the full list of supported parameters, run this with the `-help` flag.
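The per-operation results in `part-r-00000` lend themselves to quick aggregation. As a minimal
sketch (not part of Dynamometer itself), the following assumes a local copy of that output, here a
hypothetical `results.csv` seeded with the README's own example line, and computes the average
latency per operation from the `numops` and `cumulativelatency` columns:

```shell
# Hypothetical local copy of part-r-00000; seeded with the example line above.
printf 'hdfs,WRITE,MKDIRS,2,150\n' > results.csv
# Fields: user,type,operation,numops,cumulativelatency (ms).
# Average latency per operation = cumulativelatency / numops.
awk -F, '{ printf "%s %s %s: %d ops, avg %.1f ms\n", $1, $2, $3, $4, $5 / $4 }' results.csv
# → hdfs WRITE MKDIRS: 2 ops, avg 75.0 ms
```

For a real run, fetch the file first with something like `hdfs dfs -get hdfs:///dyno/results/part-r-00000 results.csv`.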