:aws_credentials: https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys
:s3_regions: https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
:path-style-access: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro
:hivemq-extension-downloads: https://www.hivemq.com/extension/s3-cluster-discovery-extension/
:hivemq-cluster-discovery: https://www.hivemq.com/docs/latest/hivemq/cluster.html#discovery
:hivemq-support: http://www.hivemq.com/support/

= HiveMQ S3 Cluster Discovery Extension

image:https://img.shields.io/badge/Extension_Type-Integration-orange?style=for-the-badge[Extension Type]
image:https://img.shields.io/github/v/release/hivemq/hivemq-s3-cluster-discovery-extension?style=for-the-badge[GitHub release (latest by date),link=https://github.com/hivemq/hivemq-s3-cluster-discovery-extension/releases/latest]
image:https://img.shields.io/github/license/hivemq/hivemq-s3-cluster-discovery-extension?style=for-the-badge&color=brightgreen[GitHub,link=LICENSE]
image:https://img.shields.io/github/actions/workflow/status/hivemq/hivemq-s3-cluster-discovery-extension/check.yml?branch=master&style=for-the-badge[GitHub Workflow Status,link=https://github.com/hivemq/hivemq-s3-cluster-discovery-extension/actions/workflows/check.yml?query=branch%3Amaster]

== Purpose

This HiveMQ extension allows your HiveMQ cluster nodes to discover each other dynamically by regularly exchanging their information via S3 from Amazon Web Services (AWS).

HiveMQ instances are added at runtime as soon as they become available, by placing the information on how to connect to them into the configured bucket.
The extension regularly checks the configured S3 bucket for files from other HiveMQ nodes.
Additionally, every broker updates its own file on a regular basis to prevent the file from expiring.

== Installation

* Download the extension from the {hivemq-extension-downloads}[HiveMQ Marketplace^].
* Copy the content of the zip file to the `extensions` folder of your HiveMQ nodes.
* Modify the `s3discovery.properties` file for your needs.
* Change the {hivemq-cluster-discovery}[discovery mechanism^] of HiveMQ to `extension` (a sketch of these steps follows this list).
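
The following is a minimal sketch of an installation on a Linux node. The HiveMQ home directory `/opt/hivemq` and the zip file name are assumptions for illustration; adapt them to your setup:

[source,bash]
----
# Unpack the extension into the HiveMQ extensions folder
# (zip name and HiveMQ home are placeholders)
unzip hivemq-s3-cluster-discovery-extension-<version>.zip -d /opt/hivemq/extensions/

# Adjust the extension configuration
vi /opt/hivemq/extensions/hivemq-s3-cluster-discovery-extension/s3discovery.properties
----
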
== Configuration

The information each node writes into the bucket consists of an IP address and a port.
The IP address and port are taken from the `external-address` and `external-port` configured in the cluster `transport` section of the `config.xml`.
If these are not set, the `bind-address` and `bind-port` are used.
The `s3discovery.properties` file can be reloaded at runtime.
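
For reference, a minimal `config.xml` cluster section could look like the following sketch (addresses and ports are placeholders, not recommendations):

[source,xml]
----
<hivemq>
    <cluster>
        <enabled>true</enabled>
        <transport>
            <tcp>
                <!-- Address and port this node binds to -->
                <bind-address>10.0.0.5</bind-address>
                <bind-port>7800</bind-port>
                <!-- Optional: address and port advertised to the other nodes -->
                <external-address>203.0.113.5</external-address>
                <external-port>7800</external-port>
            </tcp>
        </transport>
        <!-- Delegate discovery to this extension -->
        <discovery>
            <extension/>
        </discovery>
    </cluster>
</hivemq>
----
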
=== General Configuration
|===
| Config name | Required | Description

| s3-bucket-name | x | Name of the S3 bucket to use.
| s3-bucket-region | x | The region in which this S3 bucket resides. (List of regions: {s3_regions}[AWS documentation^])
| file-prefix | x | Prefix for the filename of every node's file.
| file-expiration | x | Timeout in seconds after which a file on S3 is removed.
| update-interval | x | Interval in seconds at which each node updates its own information. (Must be smaller than `file-expiration`.)
| s3-endpoint | | Endpoint URL for other S3-compatible storage services.
| s3-endpoint-region | | The region of the endpoint. (Optional)
| s3-path-style-access | | Activate or deactivate path-style access. Information about path-style access can be found in the {path-style-access}[AWS documentation^].
|===

.Example Configuration
[source]
----
s3-bucket-region:us-east-1
s3-bucket-name:hivemq
file-prefix:hivemq/cluster/nodes/
file-expiration:360
update-interval:180
credentials-type:default
----
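
When the extension points at an S3-compatible service instead of AWS S3, the optional endpoint settings come into play. A hypothetical example (the endpoint URL is a placeholder):

[source]
----
# endpoint settings for an S3-compatible service (URL is a placeholder)
s3-endpoint:http://s3-compatible.example.com:9000
s3-path-style-access:true
----
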
=== Authentication Configuration

The extension uses AWS API operations to exchange cluster information.
These operations work with `Access Keys`. If you don't know what access keys are or how to generate them, please refer to the official {aws_credentials}[AWS documentation] about credentials.

==== Default Authentication

Default Authentication tries to access S3 via the default mechanisms in the following order:

1. Java system properties
2. Environment variables
3. User credentials file
4. IAM profiles assigned to the EC2 instance

.Example Default Config
[source]
----
credentials-type:default
----

==== Java System Property Authentication

To use Java system properties to specify your AWS credentials, the following system properties need to be set:

* aws.accessKeyId
* aws.secretAccessKey

You can, for example, extend the `run.sh` or `run.bat` to start HiveMQ with the system properties:

.Example for extending `run.sh`
[source]
----
############## VARIABLES
JAVA_OPTS="$JAVA_OPTS -Daws.accessKeyId= -Daws.secretAccessKey="
----

In case you want to use a temporary session, you also need to include a session token:

* aws.sessionToken

.Example for extending `run.sh` for a temporary session
[source]
----
############## VARIABLES
JAVA_OPTS="$JAVA_OPTS -Daws.accessKeyId= -Daws.secretAccessKey= -Daws.sessionToken="
----

.Example Java System Properties Config
[source]
----
credentials-type:java_system_properties
----

==== Environment Variables Authentication

To use environment variables to specify your AWS credentials, the following variables need to be set:

* AWS_ACCESS_KEY_ID
* AWS_SECRET_ACCESS_KEY

.Linux example
[source,bash]
----
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
----

In case you want to use a temporary session, you also need to include a session token:

* AWS_SESSION_TOKEN

.Linux example
[source,bash]
----
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_SESSION_TOKEN=
----

.Example Environment Variables Config
[source]
----
credentials-type:environment_variables
----

==== User Credentials Authentication

Use the default credentials file, which can be created by calling `aws configure` (AWS CLI).
This file is usually located at `~/.aws/credentials` (platform dependent).
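
A minimal credentials file has the following shape (the values are placeholders):

[source,ini]
----
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>
----
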
The location of the file can be configured by setting the environment variable `AWS_CREDENTIAL_PROFILE_FILE` to the location of your file.

.Example User Credentials File Config
[source]
----
credentials-type:user_credentials_file
----

==== Instance Profile Credentials Authentication

Use the IAM role assigned to the EC2 instance running HiveMQ to authenticate access to S3.

WARNING: This only works if HiveMQ is running on an EC2 instance and the instance has an IAM role with the required permissions to access S3!
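
The exact permission set is an assumption here, but based on the operations described above (listing, reading, writing, and deleting node files), an attached IAM policy could look roughly like this sketch (the bucket name `hivemq` matches the example configuration; verify the actions against your own requirements):

[source,json]
----
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::hivemq"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::hivemq/*"
    }
  ]
}
----
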
.Example Instance Profile Credentials Config
[source]
----
credentials-type:instance_profile_credentials
----

==== Access Key Authentication

Use the credentials specified in the `s3discovery.properties` file to authenticate.
The variables you must provide are:

* `credentials-access-key-id`
* `credentials-secret-access-key`

.Example Access Key Config
[source]
----
credentials-type:access_key
credentials-access-key-id:
credentials-secret-access-key:
----

==== Temporary Session Authentication

Use the credentials specified in the `s3discovery.properties` file to authenticate with a temporary session.
The variables you must provide are:

* `credentials-access-key-id`
* `credentials-secret-access-key`
* `credentials-session-token`

.Example Temporary Session Config
[source]
----
credentials-type:temporary_session
credentials-access-key-id:
credentials-secret-access-key:
credentials-session-token:
----

== Metrics

The S3 cluster discovery extension delivers a set of metrics that can be used to monitor the behavior in a dashboard.
The following metrics are available:

These two counter metrics indicate successful and failed S3 query attempts to retrieve the IP addresses of cluster members:

----
com.hivemq.extensions.cluster.discovery.s3.query.success.count
com.hivemq.extensions.cluster.discovery.s3.query.failed.count
----

This gauge shows the number of cluster member IP addresses that were found during the last S3 query:

----
com.hivemq.extensions.cluster.discovery.s3.resolved-addresses
----

== First Steps

* Create an S3 bucket with the configured name.
* Verify that the given authentication can access the S3 bucket.
* Start HiveMQ, which will then start discovering other nodes via S3 (see the example commands below).
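
A sketch of the first two steps with the AWS CLI, using the bucket name and region from the example configuration (adapt to your setup):

[source,bash]
----
# Create the bucket (us-east-1 needs no location constraint)
aws s3api create-bucket --bucket hivemq --region us-east-1

# Verify that the configured credentials can list the bucket
aws s3 ls s3://hivemq/
----
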
== Need Help?

If you encounter any problems, we are happy to help.
The best place to get in contact is our {hivemq-support}[support^].

== Contributing

If you want to contribute to HiveMQ S3 Cluster Discovery Extension, see the link:CONTRIBUTING.md[contribution guidelines].

== License

HiveMQ S3 Cluster Discovery Extension is licensed under the `APACHE LICENSE, VERSION 2.0`.
A copy of the license can be found link:LICENSE[here].