{"id":13753306,"url":"https://github.com/softwaremill/mqperf","last_synced_at":"2025-08-22T01:32:38.356Z","repository":{"id":47013273,"uuid":"20400809","full_name":"softwaremill/mqperf","owner":"softwaremill","description":null,"archived":false,"fork":false,"pushed_at":"2024-04-17T12:27:15.000Z","size":962,"stargazers_count":145,"open_issues_count":6,"forks_count":37,"subscribers_count":36,"default_branch":"master","last_synced_at":"2024-12-13T06:50:22.263Z","etag":null,"topics":["ansible","benchmark","docker","message-queue","performance-testing","sbt","scala"],"latest_commit_sha":null,"homepage":"https://softwaremill.com/mqperf/","language":"Scala","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/softwaremill.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2014-06-02T10:50:43.000Z","updated_at":"2024-09-10T09:45:26.000Z","dependencies_parsed_at":"2024-04-17T11:47:32.137Z","dependency_job_id":"2a7568a6-42e2-49b3-9247-68cd433d9e37","html_url":"https://github.com/softwaremill/mqperf","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/softwaremill%2Fmqperf","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/softwaremill%2Fmqperf/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/softwaremill%2Fmqperf/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/softwaremill%2Fmqperf/manifests","owner_url":"https://repos.ecosyste.ms/a
pi/v1/hosts/GitHub/owners/softwaremill","download_url":"https://codeload.github.com/softwaremill/mqperf/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":230547678,"owners_count":18243227,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ansible","benchmark","docker","message-queue","performance-testing","sbt","scala"],"created_at":"2024-08-03T09:01:19.975Z","updated_at":"2024-12-20T07:06:57.643Z","avatar_url":"https://github.com/softwaremill.png","language":"Scala","readme":"# MqPerf\n\nA benchmark of message queues with data replication and at-least-once delivery guarantees.\n\nSource code for the mqperf article at SoftwareMill's blog: [Evaluating persistent, replicated message queues](https://softwaremill.com/mqperf)\n\n# Setting up the environment\n\n### Tools\nTests have been run with the following prerequisites:\n- python 3.9.5 (`via pyenv`)\n- ansible 2.9.5 (`pip install 'ansible==2.9.5'`)\n- boto3 1.17.96 (`pip install boto3`)\n\n### AWS Credentials\nMessage queues and test servers are automatically provisioned using **Ansible** on **AWS**. 
You will need to have the\n`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` present in the environment for things to work properly, as well\nas Ansible and boto3 installed.\n\nSee [Creating AWS access key](https://aws.amazon.com/premiumsupport/knowledge-center/create-access-key/) for details.\n\n# Message generation notes\n* By default, each message has a length of 100 characters (this is configurable)\n* For each test, we generate a pool of 10000 random messages\n* Each batch of messages is constructed using messages from that pool\n* Each message in the batch is modified: 13 characters from the end are replaced with a stringified timestamp\n  (the TS value is used for measurement on the receiver end)\n\nPlease consider the above when configuring the message size parameter in the test configuration: `\"msg_size\": 100`.\nIf a message is too short, then the majority of its content will be the TS information. For that reason, we suggest\nconfiguring a message length of 50+ characters.\n\n# Configuring tests\nTest configurations are located under `ansible/tests`. Each configuration has a number of parameters \nthat may influence the test execution and its results.\n\n# Running tests\n*Note: all commands should be run in the `ansible` directory*\n\n### Provision broker nodes with relevant script\n```shell\nansible-playbook install_and_setup_YourQueueName.yml\n```\n*Note: since **AWS SQS** is a serverless offering, you don't need to set up anything for it. 
For SQS, you can skip this step.*\n\n*Note: you can select the EC2 instance type for your tests by setting `ec2_instance_type` in the `group_vars/all.yml` file*\n\n### Provision sender and receiver nodes\n```shell\nansible-playbook provision_mqperf_nodes.yml\n```\n*Note: you can adjust the number of these **EC2** instances for your own tests.*\n\n**WARNING: after each code change, you'll need to remove the fat-jars from the `target/scala-2.12` directory and re-run \n`provision_mqperf_nodes.yml`.**\n\n### Provision Prometheus and Grafana nodes\n```shell\nansible-playbook install_and_setup_prometheus.yml\n```\n**WARNING: this must be done each time after provisioning new sender / receiver nodes (previous step) so that Prometheus \nis properly configured to scrape the new servers for metrics**\n\n### Monitoring tests\nMetrics are gathered using **Prometheus** and visualized using **Grafana**.\n\nAccessing the monitoring dashboard:\n* Look up the *public* IP address of the EC2 node where the metric tools have been deployed.\n* Open `IP:3000/dashboards` in your browser\n* Log in with the `admin/pass` credentials\n* Select `MQPerf Dashboard`\n\n### Execute test\n* Choose your test configuration from the `tests` directory\n* Use the file name as the `test_name` in the `run_tests.yml` file\n* Run the command\n```shell\nansible-playbook run_tests.yml\n```\n\n# Cleaning up\nThere are a few commands dedicated to cleaning up the cloud resources after test execution.\n\n* Stopping sender and receiver processing\n```shell\nansible-playbook stop.yml\n```\n\n* Terminating EC2 instances\n```shell\nansible-playbook shutdown_ec2_instances.yml\n```\n\n* Removing all MQPerf-related resources on AWS\n```shell\nansible-playbook remove_aws_resources.yml\n```\n\n# Utilities\n* Checking receiver/sender status\n```shell\nansible-playbook check_status.yml\n```\n\n* Running sender nodes only\n```shell\nansible-playbook sender_only.yml\n```\n\n* Running receiver nodes only\n```shell\nansible-playbook 
receiver_only.yml\n```\n\n# Implementation-specific notes\n\n## Kafka\nBefore running the tests, create the Kafka topics by running `ansible-playbook kafka_create_topic.yml`\n\n## Redpanda [vectorized.io]\nRedpanda requires the XFS filesystem; to configure it, set `storage_fs_type: xfs` in the `all.yml` file.\nBefore running the tests, create the Redpanda topics by running `ansible-playbook redpanda_create_topic.yml`.\nThe default partition count in the topic creation script is 64; if you need to adjust it, update the `--partitions 64` param in the `redpanda_create_topic.yml` script.\n\n## Redis Streams\nBefore running the tests, create the required streams and consumer groups by running `ansible-playbook redistreams_create_streams.yml`.\nThis script creates streams named stream0, stream1, ... stream100. If you need more streams, please edit the loop counter.\n\nIf you'd like to rerun tests without cluster redeployment, use `ansible-playbook redistreams_trim_streams.yml` to flush the streams.\nTo adjust the stream count, use the `streamCount` property in the test JSON.\nNote: the cluster create command (the last step) sometimes fails randomly; it's sometimes easier to run it directly from EC2.\n\n## Pulsar\nThe ack property is set at the BookKeeper level via the CLI, REST, or a startup parameter. 
\n[Go to the docs](https://pulsar.apache.org/docs/en/administration-zk-bk/#bookkeeper-persistence-policies) for more details.\nCurrently, this is not implemented, hence the `mq.ack` attribute is ignored.\n\n## RabbitMQ\n* when installing RabbitMQ, you need to specify the Erlang cookie, e.g.: \n`ansible-playbook install_and_setup_rabbitmq.yml -e erlang_cookie=1234`\n* the management console is available on port 15672 (`guest`/`guest`)\n* if you'd like to SSH to the broker servers, the user is `centos`\n* queues starting with `ha.` will be mirrored\n\n## ActiveMQ\n* the management console is available on port 8161 (`admin`/`admin`)\n\n## ActiveMQ Artemis\n* note that for the client code, we are using the same one as for ActiveMQ (`ActiveMq.scala`)\n* there is no dedicated management console for ActiveMQ Artemis; however, monitoring is possible via the exposed [Jolokia](https://jolokia.org/) web app, which is deployed alongside ActiveMQ Artemis by default. To view the broker's data:\n    * Navigate to `http://\u003cAWS_EC2_PUBLIC_IP\u003e:8161/jolokia/list` - plain JSON content should be visible, which verifies that it works.\n    * To view the instance's state, navigate to e.g. `http://\u003cAWS_EC2_PUBLIC_IP\u003e:8161/jolokia/read/org.apache.activemq.artemis:address=\"mq\",broker=\"\u003cBROKER_NAME\u003e\",component=addresses`, where `org.apache.activemq.artemis:address=\"mq\",broker=\"\u003cBROKER_NAME\u003e\",component=addresses` is the key (the `\"` signs are obligatory). For other keys, refer to the previous step. 
\n    * `\u003cBROKER_NAME\u003e` typically resolves to the AWS_EC2_PRIVATE_IP with `.` replaced by `_`.\n* configuration changes: bumped Xmx, bumped global-max-size\n\n## EventStore\n* configuration changes: see the `EventStoreMq` implementation\n\n## Oracle AQ support\n* to build the oracleaq module, first install the required dependencies available in your Oracle DB installation:\n    * aqapi.jar (oracle/product/11.2.0/dbhome_1/rdbms/jlib/aqapi.jar)\n    * ojdbc6.jar (oracle/product/11.2.0/dbhome_1/jdbc/lib/ojdbc6.jar)\n\n* to install a dependency in your local repository, create a `build.sbt` file:\n```\norganization := \"com.oracle\"\nname := \"ojdbc6\"\nversion := \"1.0.0\"\nscalaVersion := \"2.11.6\"\npackageBin in Compile := file(s\"${name.value}.jar\")\n```\nNow you can publish the file; it should then be available in `~/.ivy2/local/com.oracle/`:\n```sh\n$ sbt publishLocal\n```\n\n# Ansible notes\nThe Zookeeper installation contains an ugly workaround for a bug in Cloudera's RPM repositories \n(http://community.cloudera.com/t5/Cloudera-Manager-Installation/cloudera-manager-installer-fails-on-centos-7-3-vanilla/td-p/55086/highlight/true).\nSee `ansible/roles/zookeeper/tasks/main.yml`. This should be removed in the future, once the bug is fixed by Cloudera.\n\n# FAQ\n- I'm getting: *skipping: no hosts matched*, why? 
You are probably running Ansible from the project root.\n  Instead, `cd` to `ansible/` (where `ansible.cfg` is located) and run the playbook from that location.\n\n# Local test\nTo run locally, execute the Sender and Receiver classes with the following:\n- parameters:\n\n`-Dconfig.file=/tmp/test-config.json`\n\n- environment variables:\n\n`RUN_ID=1;HOST_ID=1`\n","funding_links":[],"categories":["scala"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsoftwaremill%2Fmqperf","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsoftwaremill%2Fmqperf","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsoftwaremill%2Fmqperf/lists"}