{"id":13458620,"url":"https://github.com/leonchen83/redis-rdb-cli","last_synced_at":"2026-03-05T20:04:29.440Z","repository":{"id":38418504,"uuid":"139314347","full_name":"leonchen83/redis-rdb-cli","owner":"leonchen83","description":"Redis rdb CLI  : A CLI tool that can parse, filter, split, merge rdb and analyze memory usage offline. It can also sync 2 redis data and allow user define their own sink service to migrate redis data to somewhere.","archived":false,"fork":false,"pushed_at":"2024-09-03T03:04:56.000Z","size":8103,"stargazers_count":430,"open_issues_count":7,"forks_count":90,"subscribers_count":24,"default_branch":"master","last_synced_at":"2025-03-12T09:37:23.619Z","etag":null,"topics":["analyze","cli","dashboard","memory","migrate","rdb","redis"],"latest_commit_sha":null,"homepage":"","language":"Java","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/leonchen83.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null},"funding":{"custom":["https://www.paypal.com/paypalme/leonchen83"]}},"created_at":"2018-07-01T09:01:53.000Z","updated_at":"2025-03-09T16:10:50.000Z","dependencies_parsed_at":"2023-11-15T03:31:11.800Z","dependency_job_id":"6bfa0e26-fecb-4761-a9c7-7434277928d7","html_url":"https://github.com/leonchen83/redis-rdb-cli","commit_stats":null,"previous_names":[],"tags_count":42,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leonchen83%2Fredis-rdb-cli","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositorie
s/leonchen83%2Fredis-rdb-cli/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leonchen83%2Fredis-rdb-cli/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leonchen83%2Fredis-rdb-cli/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/leonchen83","download_url":"https://codeload.github.com/leonchen83/redis-rdb-cli/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":245298104,"owners_count":20592541,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["analyze","cli","dashboard","memory","migrate","rdb","redis"],"created_at":"2024-07-31T09:00:54.580Z","updated_at":"2026-03-05T20:04:29.426Z","avatar_url":"https://github.com/leonchen83.png","language":"Java","readme":"# redis-rdb-cli\n\n\u003ca href=\"https://raw.githubusercontent.com/leonchen83/share/master/other/wechat_payment.png\" target=\"_blank\"\u003e\u003cimg src=\"https://github.com/leonchen83/share/blob/master/other/buymeacoffee.jpg?raw=true\" alt=\"Buy Me A Coffee\" style=\"height: 41px !important;width: 174px !important;box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;-webkit-box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;\" \u003e\u003c/a\u003e\n\nA tool that can parse, filter, split, and merge RDB files, as well as analyze memory usage offline. 
It can also sync data between two Redis instances and allows users to define their own sink services to migrate Redis data to custom destinations.\n\n[![Java CI](https://github.com/leonchen83/redis-rdb-cli/actions/workflows/maven.yml/badge.svg)](https://github.com/leonchen83/redis-rdb-cli/actions/workflows/maven.yml)\n[![Gitter](https://badges.gitter.im/leonchen83/redis-rdb-cli.svg)](https://gitter.im/leonchen83/redis-rdb-cli?utm_source=badge\u0026utm_medium=badge\u0026utm_campaign=pr-badge)\n[![Hex.pm](https://img.shields.io/hexpm/l/plug.svg?maxAge=2592000)](https://github.com/leonchen83/redis-rdb-cli/blob/master/LICENSE)  \n  \n## Chat with the Author\n  \n[![Join the chat at https://gitter.im/leonchen83/redis-rdb-cli](https://badges.gitter.im/leonchen83/redis-rdb-cli.svg)](https://gitter.im/leonchen83/redis-rdb-cli?utm_source=badge\u0026utm_medium=badge\u0026utm_campaign=pr-badge\u0026utm_content=badge)  \n  \n## Contact the Author\n  \n**chen.bao.yi@gmail.com**  \n  \n## Binary Releases\n\n[Binary Releases](https://github.com/leonchen83/redis-rdb-cli/releases)\n\n## Runtime Requirements\n\n```text\nJDK 1.8 or later\n```\n\n## Installation\n\n```shell\n$ wget https://github.com/leonchen83/redis-rdb-cli/releases/latest/download/redis-rdb-cli-release.zip\n$ unzip redis-rdb-cli-release.zip\n$ ./redis-rdb-cli/bin/rct -h\n\n# macOS Homebrew installation\n$ brew tap leonchen83/redis-rdb-cli\n$ brew install redis-rdb-cli\n$ rct -h\n```\n\n## Compiling from Source\n\n### Requirements\n```text\nJDK 1.8 or later\nMaven 3.3.1 or later\n```\n\n### Compile \u0026 Run\n```shell\n$ git clone https://github.com/leonchen83/redis-rdb-cli.git\n$ cd redis-rdb-cli\n$ mvn clean install -Dmaven.test.skip=true\n$ ./target/redis-rdb-cli-release/redis-rdb-cli/bin/rct -h \n```\n\n## Running with Docker\n\n```shell\n$ docker run -it --rm redisrdbcli/redis-rdb-cli:latest\n$ rct -V\n```\n\n## Windows Environment Variables\n  \nTo run the commands from any directory, add the 
`/path/to/redis-rdb-cli/bin` directory to your system's `Path` environment variable.\n  \n  \n## Usage\n\n### Mass Insertion\n\n```shell\n$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -r\n$ cat /path/to/dump.aof | /redis/src/redis-cli -p 6379 --pipe\n```\n\n### Convert RDB to Dump Format\n\n```shell\n$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof\n```\n\n### Convert RDB to JSON Format\n\n```shell\n$ rct -f json -s /path/to/dump.rdb -o /path/to/dump.json\n```\n\n### Count Keys in RDB\n\n```shell\n$ rct -f count -s /path/to/dump.rdb -o /path/to/dump.csv\n```\n\n### Find Top 50 Largest Keys\n\n```shell\n$ rct -f mem -s /path/to/dump.rdb -o /path/to/dump.mem -l 50\n```\n\n### Diff RDBs\n\n```shell\n$ rct -f diff -s /path/to/dump1.rdb -o /path/to/dump1.diff\n$ rct -f diff -s /path/to/dump2.rdb -o /path/to/dump2.diff\n$ diff /path/to/dump1.diff /path/to/dump2.diff\n```\n\n### Convert RDB to RESP\n\n```shell\n$ rct -f resp -s /path/to/dump.rdb -o /path/to/appendonly.aof\n```\n\n### Sync Two Redis Instances\n```shell\n$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r\n```\n\n### Sync a Single Instance to a Redis Cluster\n```shell\n$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:30001 -r -d 0\n```\n\n### Handling Infinite Loops in `rst`\n\n```shell\n# Set client-output-buffer-limit in the source Redis instance\n$ redis-cli config set client-output-buffer-limit \"slave 0 0 0\"\n$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r\n```\n\n### Migrate RDB to a Remote Redis Instance\n\n```shell\n$ rmt -s /path/to/dump.rdb -m redis://192.168.1.105:6379 -r\n```\n\n### Downgrade Migration\n\n```shell\n# Migrate data from Redis 7 to Redis 6\n# For `dump_rdb_version`, please see the comments in redis-rdb-cli.conf\n$ sed -i 's/dump_rdb_version=-1/dump_rdb_version=9/g' /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf\n$ rmt -s redis://com.redis7:6379 -m redis://com.redis6:6379 -r\n```\n\n### Handling Large Keys During Migration\n```shell\n# Set 
proto-max-bulk-len in the target Redis instance\n$ redis-cli -h ${host} -p 6380 -a ${pwd} config set proto-max-bulk-len 2048mb\n\n# Set Xms and Xmx for the redis-rdb-cli node\n$ export JAVA_TOOL_OPTIONS=\"-Xms8g -Xmx8g\"\n\n# Execute migration\n$ rmt -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -r\n```\n\n### Migrate RDB to a Remote Redis Cluster\n\nUsing `nodes.conf`:\n```shell\n$ rmt -s /path/to/dump.rdb -c ./nodes-30001.conf -r\n```\n\nAlternatively, you can connect to one of the cluster nodes directly:\n```shell\n$ rmt -s /path/to/dump.rdb -m redis://127.0.0.1:30001 -r\n```\n\n### Backup a Remote RDB\n\n```shell\n$ rdt -b redis://192.168.1.105:6379 -o /path/to/dump.rdb\n```\n\n### Backup Remote RDB and Change Database Index\n\n```shell\n$ rdt -b redis://192.168.1.105:6379 -o /path/to/dump.rdb --goal 3\n```\n\n### Filter an RDB File\n\n```shell\n$ rdt -b /path/to/dump.rdb -o /path/to/filtered-dump.rdb -d 0 -t string\n```\n\n### Split an RDB File using a Cluster's `nodes.conf`\n\n```shell\n$ rdt -s ./dump.rdb -c ./nodes.conf -o /path/to/folder -d 0\n```\n\n### Concat Multiple RDB Files\n\n```shell\n$ rdt -m ./dump1.rdb ./dump2.rdb -o ./dump.rdb -t hash\n```\n\n### Extract RDB and AOF from a Mixed File\n\n```shell\n$ rcut -s ./aof-use-rdb-preamble.aof -r ./dump.rdb -a ./appendonly.aof\n```\n\n### Additional Configuration\n\nAdditional configuration parameters can be modified in `/path/to/redis-rdb-cli/conf/redis-rdb-cli.conf`.\n\n### Filtering\n\nThe `rct`, `rdt`, and `rmt` commands support filtering by data `type`, `db` index, and `key` (using Java-style regular expressions). 
The `rst` command supports filtering by `db` index only.\n  \nFor example:\n```shell\n$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -d 0\n$ rct -f dump -s /path/to/dump.rdb -o /path/to/dump.aof -t string hash\n$ rmt -s /path/to/dump.rdb -m redis://192.168.1.105:6379 -r -d 0 1 -t list\n$ rst -s redis://127.0.0.1:6379 -m redis://127.0.0.1:6380 -d 0\n```\n\n### Monitor Redis Server\n\n```shell\n# Step 1: \n# Open `/path/to/redis-rdb-cli/conf/redis-rdb-cli.conf`\n# and change the `metric_gateway` property from `none` to `influxdb`.\n#\n# Step 2:\n$ cd /path/to/redis-rdb-cli/dashboard\n$ docker-compose up -d\n#\n# Step 3:\n$ rmonitor -s redis://127.0.0.1:6379 -n standalone\n$ rmonitor -s redis://127.0.0.1:30001 -n cluster\n$ rmonitor -s redis-sentinel://sntnl-usr:sntnl-pwd@127.0.0.1:26379?master=mymaster\u0026authUser=usr\u0026authPassword=pwd -n sentinel\n#\n# Step 4:\n# Open `http://localhost:3000/d/monitor/monitor` in your browser. \n# Log in to Grafana with username `admin` and password `admin` to view the dashboard.\n```\n\n![monitor](./images/monitor.png)\n\n## Difference between `rmt` and `rst`\n\n1.  **`rmt`**: When `rmt` starts, the source Redis instance first performs a `BGSAVE` to generate an RDB snapshot. The `rmt` command migrates this snapshot file to the target Redis instance. The command terminates after the migration is complete.\n2.  **`rst`**: In addition to migrating the initial RDB snapshot, `rst` also syncs incremental data changes from the source to the target. It runs continuously until manually stopped (e.g., with `CTRL+C`). Note that `rst` only supports filtering by `db` index. For more details, see [Limitations of Migration](#limitations-of-migration).\n\n## Dashboard\n\nSince v0.1.9, the `rct -f mem` command supports visualizing its output on a Grafana dashboard.\n![memory](./images/memory.png)  \n\nTo enable this feature, you must have Docker and Docker Compose installed. 
Please refer to the official [Docker documentation](https://docs.docker.com/install/) for installation instructions.\nThen, run the following command:\n```shell\n$ cd /path/to/redis-rdb-cli/dashboard\n\n# Start\n$ docker-compose up -d\n\n# Stop\n$ docker-compose down\n```\n  \nNext, open `/path/to/redis-rdb-cli/conf/redis-rdb-cli.conf` and change the `metric_gateway` parameter from `none` to `influxdb`.\n  \nOpen `http://localhost:3000` in your browser to view the results from `rct -f mem`.  \n  \nIf you are deploying this tool across multiple instances, ensure that the `metric_instance` parameter is set to a unique value for each instance.  \n  \n## Using with Redis 6\n  \n### Redis 6 with SSL\n  \n1.  Use OpenSSL to generate a keystore:\n```shell\n$ cd /path/to/redis-6.0-rc1\n$ ./utils/gen-test-certs.sh\n$ cd tests/tls\n$ openssl pkcs12 -export -CAfile ca.crt -in redis.crt -inkey redis.key -out redis.p12\n```\n  \n2.  If the source and target Redis instances use the same keystore, configure the following parameters in `redis-rdb-cli.conf`:\n`source_keystore_path` and `target_keystore_path` should point to `/path/to/redis-6.0-rc1/tests/tls/redis.p12`.\nSet `source_keystore_pass` and `target_keystore_pass`.\n  \n3.  After configuring the SSL parameters, use the `rediss://` protocol in your commands to enable SSL, for example: `rst -s rediss://127.0.0.1:6379 -m rediss://127.0.0.1:30001 -r -d 0`\n  \n### Redis 6 with ACL\n\n1.  Use the following URI format to connect with ACL credentials:\n```shell\n$ rst -s redis://user:pass@127.0.0.1:6379 -m redis://user:pass@127.0.0.1:6380 -r -d 0\n```\n  \n2.  
The specified `user` **MUST** have `+@all` permissions to execute the necessary commands.\n  \n## Hacking `rmt`\n\n### `rmt` Threading Model\n\nThe `rmt` command uses the following four parameters from `redis-rdb-cli.conf` to manage data migration:\n```properties\nmigrate_batch_size=4096\nmigrate_threads=4\nmigrate_flush=yes\nmigrate_retries=1\n```\n\nThe most important parameter is `migrate_threads`. A value of `4`, for example, means that the following threading model is used for migration:\n\n```text\nsingle redis ----\u003e single redis\n\n+--------------+         +----------+     thread 1      +--------------+\n|              |    +----| Endpoint |-------------------|              |\n|              |    |    +----------+                   |              |\n|              |    |                                   |              |\n|              |    |    +----------+     thread 2      |              |\n|              |    |----| Endpoint |-------------------|              |\n|              |    |    +----------+                   |              |\n| Source Redis |----|                                   | Target Redis |\n|              |    |    +----------+     thread 3      |              |\n|              |    |----| Endpoint |-------------------|              |\n|              |    |    +----------+                   |              |\n|              |    |                                   |              |\n|              |    |    +----------+     thread 4      |              |\n|              |    +----| Endpoint |-------------------|              |\n+--------------+         +----------+                   +--------------+\n``` \n\n```text\nsingle redis ----\u003e redis cluster\n\n+--------------+         +----------+     thread 1      +--------------+\n|              |    +----| Endpoints|-------------------|              |\n|              |    |    +----------+                   |              |\n|              |    |                                   |   
           |\n|              |    |    +----------+     thread 2      |              |\n|              |    |----| Endpoints|-------------------|              |\n|              |    |    +----------+                   |              |\n| Source Redis |----|                                   | Redis cluster|\n|              |    |    +----------+     thread 3      |              |\n|              |    |----| Endpoints|-------------------|              |\n|              |    |    +----------+                   |              |\n|              |    |                                   |              |\n|              |    |    +----------+     thread 4      |              |\n|              |    +----| Endpoints|-------------------|              |\n+--------------+         +----------+                   +--------------+\n``` \n\nThe key difference between migrating to a single instance versus a cluster lies in the use of `Endpoint` versus `Endpoints`. For cluster migrations, `Endpoints` represents a collection of `Endpoint` objects, each pointing to a `master` instance in the cluster. For example, in a Redis cluster with 3 masters and 3 replicas, if `migrate_threads` is set to `4`, the tool will establish a total of `3 * 4 = 12` connections to the master instances. \n\n### Migration Performance\n\nThe following three parameters affect migration performance:\n```properties\nmigrate_batch_size=4096\nmigrate_retries=1\nmigrate_flush=yes\n```\n\n1.  `migrate_batch_size`: By default, data is migrated using Redis pipelining. This parameter sets the batch size for the pipeline. If set to `1`, pipelining is effectively disabled, and each command is sent individually.\n2.  `migrate_retries`: If a socket error occurs, the tool will recreate the socket and retry the failed command. This parameter specifies the number of retry attempts.\n3.  `migrate_flush`: If set to `yes`, the output stream is flushed after every command. If set to `no`, the stream is flushed every 64KB. 
Note: Retries (`migrate_retries`) only take effect when `migrate_flush` is set to `yes`.\n\n### Migration Principle\n\n```text\n+---------------+             +-------------------+    restore      +---------------+ \n|               |             | redis dump format |----------------\u003e|               |\n|               |             |-------------------|    restore      |               |\n|               |   convert   | redis dump format |----------------\u003e|               |\n|    Dump rdb   |------------\u003e|-------------------|    restore      |  Target Redis |\n|               |             | redis dump format |----------------\u003e|               |\n|               |             |-------------------|    restore      |               |\n|               |             | redis dump format |----------------\u003e|               |\n+---------------+             +-------------------+                 +---------------+ \n```\n\n## Limitations of Migration\n\n1.  When migrating to a cluster, this tool uses the cluster's `nodes.conf` file and does not handle `MOVED` or `ASK` redirections. Therefore, a key limitation is that the cluster **MUST** be in a stable state during the migration. This means there should be no slots in a `migrating` or `importing` state, and no failovers (promoting a replica to master) should occur.\n2.  When using `rst` to migrate data to a cluster, the following commands are not supported: `PUBLISH`, `SWAPDB`, `MOVE`, `FLUSHALL`, `FLUSHDB`, `MULTI`, `EXEC`, `SCRIPT FLUSH`, `SCRIPT LOAD`, `EVAL`, `EVALSHA`.\n3.  
Additionally, the following commands are **ONLY SUPPORTED WHEN ALL KEYS IN THE COMMAND BELONG TO THE SAME SLOT** (e.g., `del {user}:1 {user}:2`): `RPOPLPUSH`, `SDIFFSTORE`, `SINTERSTORE`, `SMOVE`, `ZINTERSTORE`, `ZUNIONSTORE`, `DEL`, `UNLINK`, `RENAME`, `RENAMENX`, `PFMERGE`, `PFCOUNT`, `MSETNX`, `BRPOPLPUSH`, `BITOP`, `MSET`, `COPY`, `BLMOVE`, `LMOVE`, `ZDIFFSTORE`, `GEOSEARCHSTORE`, `MSETEX`.\n\n## Hacking `ret`\n\n### What the `ret` Command Does\n\n1.  The `ret` command allows users to define their own sink services to send Redis data to other systems, such as MySQL or MongoDB.\n2.  It uses Java's Service Provider Interface (SPI) to load custom extensions.\n\n### How to Implement a Sink Service\n\nFollow the steps below to implement your own sink service.\n\n1.  Create a new Maven project:\n```xml\n\u003c?xml version=\"1.0\" encoding=\"UTF-8\"?\u003e\n\u003cproject xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"\u003e\n    \u003cmodelVersion\u003e4.0.0\u003c/modelVersion\u003e\n    \n    \u003cgroupId\u003ecom.your.company\u003c/groupId\u003e\n    \u003cartifactId\u003eyour-sink-service\u003c/artifactId\u003e\n    \u003cversion\u003e1.0.0\u003c/version\u003e\n    \n    \u003cproperties\u003e\n        \u003cproject.build.sourceEncoding\u003eUTF-8\u003c/project.build.sourceEncoding\u003e\n        \u003cmaven.compiler.source\u003e1.8\u003c/maven.compiler.source\u003e\n        \u003cmaven.compiler.target\u003e1.8\u003c/maven.compiler.target\u003e\n    \u003c/properties\u003e\n\n    \u003cdependencies\u003e\n        \u003cdependency\u003e\n            \u003cgroupId\u003ecom.moilioncircle\u003c/groupId\u003e\n            \u003cartifactId\u003eredis-rdb-cli-api\u003c/artifactId\u003e\n            \u003cversion\u003e1.9.0\u003c/version\u003e\n            \u003cscope\u003eprovided\u003c/scope\u003e\n  
      \u003c/dependency\u003e\n        \u003cdependency\u003e\n            \u003cgroupId\u003ecom.moilioncircle\u003c/groupId\u003e\n            \u003cartifactId\u003eredis-replicator\u003c/artifactId\u003e\n            \u003cversion\u003e[3.9.0, )\u003c/version\u003e\n            \u003cscope\u003eprovided\u003c/scope\u003e\n        \u003c/dependency\u003e\n        \u003cdependency\u003e\n            \u003cgroupId\u003eorg.slf4j\u003c/groupId\u003e\n            \u003cartifactId\u003eslf4j-api\u003c/artifactId\u003e\n            \u003cversion\u003e1.7.25\u003c/version\u003e\n            \u003cscope\u003eprovided\u003c/scope\u003e\n        \u003c/dependency\u003e\n        \n        \u003c!-- \n        \u003cdependency\u003e\n            other dependencies\n        \u003c/dependency\u003e\n        --\u003e\n        \n    \u003c/dependencies\u003e\n    \n    \u003cbuild\u003e\n        \u003cplugins\u003e\n            \u003cplugin\u003e\n                \u003cartifactId\u003emaven-assembly-plugin\u003c/artifactId\u003e\n                \u003cversion\u003e3.1.0\u003c/version\u003e\n                \u003cconfiguration\u003e\n                    \u003cdescriptorRefs\u003e\n                        \u003cdescriptorRef\u003ejar-with-dependencies\u003c/descriptorRef\u003e\n                    \u003c/descriptorRefs\u003e\n                \u003c/configuration\u003e\n                \u003cexecutions\u003e\n                    \u003cexecution\u003e\n                        \u003cid\u003emake-assembly\u003c/id\u003e\n                        \u003cphase\u003epackage\u003c/phase\u003e\n                        \u003cgoals\u003e\n                            \u003cgoal\u003esingle\u003c/goal\u003e\n                        \u003c/goals\u003e\n                    \u003c/execution\u003e\n                \u003c/executions\u003e\n            \u003c/plugin\u003e\n            \u003cplugin\u003e\n                \u003cgroupId\u003eorg.apache.maven.plugins\u003c/groupId\u003e\n                
\u003cartifactId\u003emaven-compiler-plugin\u003c/artifactId\u003e\n                \u003cversion\u003e3.8.1\u003c/version\u003e\n                \u003cconfiguration\u003e\n                    \u003csource\u003e${maven.compiler.source}\u003c/source\u003e\n                    \u003ctarget\u003e${maven.compiler.target}\u003c/target\u003e\n                    \u003cencoding\u003e${project.build.sourceEncoding}\u003c/encoding\u003e\n                \u003c/configuration\u003e\n            \u003c/plugin\u003e\n        \u003c/plugins\u003e\n    \u003c/build\u003e\n\u003c/project\u003e\n```\n\n2.  Implement the `SinkService` interface:\n```java\npublic class YourSinkService implements SinkService {\n\n    @Override\n    public String sink() {\n        return \"your-sink-service\";\n    }\n\n    @Override\n    public void init(File config) throws IOException {\n        // Parse your external sink config\n    }\n\n    @Override\n    public void onEvent(Replicator replicator, Event event) {\n        // Your sink business logic\n    }\n}\n```\n\n3.  Register the service using Java SPI:\nCreate the file `src/main/resources/META-INF/services/com.moilioncircle.redis.rdb.cli.api.sink.SinkService` with the following content:\n```text\nyour.package.YourSinkService\n```\n\n4.  Package and Deploy:\n```shell\n$ mvn clean install\n$ cp ./target/your-sink-service-1.0.0-jar-with-dependencies.jar /path/to/redis-rdb-cli/lib\n```\n\n5.  Run Your Sink Service:\n```shell\n$ ret -s redis://127.0.0.1:6379 -c config.conf -n your-sink-service\n```\n\n6.  
Debug Your Sink Service:\n```java  \n    public static void main(String[] args) throws Exception {\n        Replicator replicator = new RedisReplicator(\"redis://127.0.0.1:6379\");\n        Runtime.getRuntime().addShutdownHook(new Thread(() -\u003e {\n            Replicators.closeQuietly(replicator);\n        }));\n        replicator.addExceptionListener((rep, tx, e) -\u003e {\n            throw new RuntimeException(tx.getMessage(), tx);\n        });\n        SinkService sink = new YourSinkService();\n        sink.init(new File(\"/path/to/your-sink.conf\"));\n        replicator.addEventListener(new AsyncEventListener(sink, replicator, 4, Executors.defaultThreadFactory()));\n        replicator.open();\n    }\n```\n\n### How to Implement a Formatter Service\n\n1.  Create `YourFormatterService` that extends `AbstractFormatterService`:\n```java\npublic class YourFormatterService extends AbstractFormatterService {\n\n    @Override\n    public String format() {\n        return \"test\";\n    }\n\n    @Override\n    public Event applyString(Replicator replicator, RedisInputStream in, int version, byte[] key, int type, ContextKeyValuePair context) throws IOException {\n        byte[] val = new DefaultRdbValueVisitor(replicator).applyString(in, version);\n        getEscaper().encode(key, getOutputStream());\n        getEscaper().encode(val, getOutputStream());\n        getOutputStream().write('\\n');\n        return context;\n    }\n}\n```\n\n2.  Register the formatter using Java SPI:\nCreate the file `src/main/resources/META-INF/services/com.moilioncircle.redis.rdb.cli.api.format.FormatterService` with the following content:\n```text\nyour.package.YourFormatterService\n```\n\n3.  Package and Deploy:\n```shell\n$ mvn clean install\n$ cp ./target/your-service-1.0.0-jar-with-dependencies.jar /path/to/redis-rdb-cli/lib\n```\n\n4.  
Run your formatter service:\n```shell\n$ rct -f test -s redis://127.0.0.1:6379 -o ./out.csv -t string -d 0 -e json\n```\n\n## Contributors\n  \n* [Baoyi Chen](https://github.com/leonchen83)\n* [Jintao Zhang](https://github.com/tao12345666333)\n* [Maz Ahmadi](https://github.com/cmdshepard)\n* [Anish Karandikar](https://github.com/anishkny)\n* [Air](https://github.com/air3ijai)\n* [Raghu Nandan B S](https://github.com/raghu-nandan-bs)\n* [Mads Nedergaard](https://github.com/madsnedergaard)\n\n## Consulting\n\nCommercial support for `redis-rdb-cli` is available. The following services are currently offered:\n* Onsite consulting: $10,000 per day\n* Onsite training: $10,000 per day\n\nYou may also contact `Baoyi Chen` directly at [chen.bao.yi@gmail.com](mailto:chen.bao.yi@gmail.com).\n\n## Supported by 宁文君\n\n27 January 2023 was a sad day; I lost my mother, 宁文君. She was encouraging and supported me in developing this tool. Every time a company used this tool, she got excited like a child and encouraged me to keep going. Without her, I couldn't have maintained this tool for so many years. Even though I didn't achieve much, she was still proud of me. R.I.P and may God bless her.\n\n## Supported by IntelliJ IDEA\n\n[IntelliJ IDEA](https://www.jetbrains.com/?from=redis-rdb-cli) is a Java integrated development environment (IDE) for developing computer software.\nIt is developed by JetBrains (formerly known as IntelliJ), and is available as an Apache 2 Licensed community edition,\nand in a proprietary commercial edition. Both can be used for commercial development.","funding_links":["https://www.paypal.com/paypalme/leonchen83"],"categories":["Java"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fleonchen83%2Fredis-rdb-cli","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fleonchen83%2Fredis-rdb-cli","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fleonchen83%2Fredis-rdb-cli/lists"}