Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/obukhov/go-redis-migrate
Tool to copy data from one redis instance to another (filtered by glob pattern)
- Host: GitHub
- URL: https://github.com/obukhov/go-redis-migrate
- Owner: obukhov
- License: apache-2.0
- Created: 2019-02-02T12:58:25.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2023-05-05T21:31:44.000Z (over 1 year ago)
- Last Synced: 2024-08-01T19:40:11.216Z (3 months ago)
- Topics: devops, infrastructure, redis
- Language: Go
- Homepage:
- Size: 4.3 MB
- Stars: 34
- Watchers: 1
- Forks: 9
- Open Issues: 1
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
README
# go-redis-migrate
Script to copy data by key pattern from one Redis instance to another.
["Copy 1 Million Redis Keys in 2 Minutes with Golang" blog post](https://blog.dclg.net/copy-1-million-redis-keys-in-2-minutes-with-golang)
### Usage
```bash
go-redis-migrate copy <source> <destination> --pattern="prefix:*"
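# For example (hypothetical addresses and password), copying all "session:*" keys
# from a local instance to a password-protected remote one could look like:
go-redis-migrate copy localhost:6379 redis://:secret@10.0.0.5:6379/1 --pattern="session:*"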
```

*Source*, *destination* - can be provided as `<host>:<port>` or in Redis URL format: `redis://[:<password>@]<host>:<port>[/<db-number>]`

*Pattern* - a glob-style pattern supported by the [Redis SCAN](https://redis.io/commands/scan) command.

Other flags:
```bash
--report int Report current status every N seconds (default 1)
--scanCount int COUNT parameter for redis SCAN command (default 100)
--exportRoutines int Number of parallel export goroutines (default 30)
--pushRoutines int Number of parallel push goroutines (default 30)
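
# For example (hypothetical values), a run tuned for larger SCAN batches and fewer workers:
go-redis-migrate copy localhost:6379 localhost:6380 --pattern="*" --scanCount=1000 --exportRoutines=10 --pushRoutines=10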
```

### Installation
Download the binary for your platform from the [releases page](https://github.com/obukhov/go-redis-migrate/releases).
### General idea
There are 3 main stages of copying keys:
1. Scanning keys in source
2. Dumping values and TTLs from source
3. Restoring values and TTLs in destination

Scanning is performed by a single goroutine; scanned keys are sent to a keys channel (of type `chan string`). From the keys channel, N export goroutines consume keys and run `DUMP` and `PTTL` for each of them as a pipelined command. The results are combined into a `KeyDump` structure and passed on to a channel of that type. Another M push goroutines consume from that channel and run the `RESTORE` command against the destination instance.

To guarantee that all keys are exported and restored, a `sync.WaitGroup` is used. To monitor the current status, a separate goroutine outputs the values of atomic counters (one per stage) every K seconds.
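As a rough sketch of this pipeline (not the project's actual source, which may differ), the stages could be wired together with the go-redis client as follows; the `KeyDump` fields, channel sizes, worker counts, and addresses here are illustrative assumptions:

```go
// Sketch of the scan -> dump -> restore pipeline, using github.com/redis/go-redis/v9.
package main

import (
	"context"
	"fmt"
	"sync"
	"sync/atomic"
	"time"

	"github.com/redis/go-redis/v9"
)

// KeyDump carries a serialized value and its TTL from source to destination.
type KeyDump struct {
	Key   string
	Value string        // output of DUMP
	TTL   time.Duration // output of PTTL (negative means no expiry)
}

func main() {
	ctx := context.Background()
	src := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // hypothetical source
	dst := redis.NewClient(&redis.Options{Addr: "localhost:6380"}) // hypothetical destination

	keys := make(chan string, 1000)
	dumps := make(chan KeyDump, 1000)
	var scanned, exported, pushed int64

	// Stage 1: a single goroutine SCANs the source and feeds the keys channel.
	go func() {
		defer close(keys)
		var cursor uint64
		for {
			batch, next, err := src.Scan(ctx, cursor, "prefix:*", 100).Result()
			if err != nil {
				panic(err)
			}
			for _, k := range batch {
				atomic.AddInt64(&scanned, 1)
				keys <- k
			}
			if next == 0 {
				return
			}
			cursor = next
		}
	}()

	// Stage 2: N export goroutines pipeline DUMP + PTTL for every scanned key.
	var exportWG sync.WaitGroup
	for i := 0; i < 30; i++ {
		exportWG.Add(1)
		go func() {
			defer exportWG.Done()
			for k := range keys {
				pipe := src.Pipeline()
				dump := pipe.Dump(ctx, k)
				pttl := pipe.PTTL(ctx, k)
				if _, err := pipe.Exec(ctx); err != nil {
					continue // key may have disappeared between SCAN and DUMP
				}
				atomic.AddInt64(&exported, 1)
				dumps <- KeyDump{Key: k, Value: dump.Val(), TTL: pttl.Val()}
			}
		}()
	}
	go func() { exportWG.Wait(); close(dumps) }()

	// Stage 3: M push goroutines RESTORE the dumped values on the destination.
	var pushWG sync.WaitGroup
	for i := 0; i < 30; i++ {
		pushWG.Add(1)
		go func() {
			defer pushWG.Done()
			for d := range dumps {
				ttl := d.TTL
				if ttl < 0 {
					ttl = 0 // RESTORE interprets 0 as "no expiry"
				}
				dst.RestoreReplace(ctx, d.Key, ttl, d.Value)
				atomic.AddInt64(&pushed, 1)
			}
		}()
	}

	// A separate goroutine reports the atomic counters every second.
	go func() {
		for range time.Tick(time.Second) {
			fmt.Printf("scanned=%d exported=%d pushed=%d\n", atomic.LoadInt64(&scanned), atomic.LoadInt64(&exported), atomic.LoadInt64(&pushed))
		}
	}()

	pushWG.Wait() // all dumps consumed and restored
	fmt.Printf("done: scanned=%d exported=%d pushed=%d\n", scanned, exported, pushed)
}
```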
### Performance tests
Performed on a laptop, with Redis instances running in Docker.
#### Test #1
Source database: 453967 keys.
Keys to copy: 10000 keys.

|    | Version 1.0 (no concurrency) | Version 2.0 (read-write concurrency) |
|----|------------------------------|--------------------------------------|
| #1 | 17.79s | 4.82s |
| #2 | 18.01s | 5.88s |
| #3 | 17.98s | 5.06s |

#### Test #2
Source database: 453967 keys.
Keys to copy: 367610 keys.

|    | Version 1.0 (no concurrency) | Version 2.0 (read-write concurrency) |
|----|----------------------------|--------------------------------------|
| #1 | 8m57.98s | 58.78s |
| #2 | 8m44.98s | 55.35s |
| #3 | 8m58.07s | 57.25s |