Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/satoshipay/stellar-core-parallel-catchup
Fast sync a Stellar Core validator node including full history with parallelization
- Host: GitHub
- URL: https://github.com/satoshipay/stellar-core-parallel-catchup
- Owner: satoshipay
- License: mit
- Created: 2018-12-27T19:06:03.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2021-01-25T13:27:34.000Z (over 3 years ago)
- Last Synced: 2024-05-11T11:32:49.130Z (5 months ago)
- Language: Shell
- Size: 20.5 KB
- Stars: 23
- Watchers: 4
- Forks: 10
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-stellar - Parallel Stellar Core Catchup - Sync a full Stellar validator node (including full history) as fast as possible. Split the big ledger into small chunks of size CHUNK_SIZE. Run a catchup for the chunks in parallel with WORKERS worker processes. Stitch together the resulting database and history archive. (Developer Resources)
- awesome-stellar-cn - Parallel Stellar Core Catchup - Sync a full Stellar validator node (including full history) as fast as possible. Split the big ledger into small chunks of size CHUNK_SIZE. Run a catchup for the chunks in parallel with WORKERS worker processes. Stitch together the resulting database and history archive. (Developer Resources)
README
# Parallel Stellar Core Catchup ⚡
## Background
### Goal
Sync a full Stellar validator node (including full history) as fast as possible.
### Problem
A full catchup takes weeks to months, even without publishing to an archive.
### Idea
* Split the big ledger into small chunks of size `CHUNK_SIZE`.
* Run a catchup for the chunks in parallel with `WORKERS` worker processes.
* Stitch together the resulting database and history archive.

## Usage

```
./catchup.sh DOCKER_COMPOSE_FILE LEDGER_MIN LEDGER_MAX CHUNK_SIZE WORKERS
```

Arguments:
* `DOCKER_COMPOSE_FILE`: use `docker-compose.pubnet.yaml` for the public network (`docker-compose.testnet.yaml` for testnet).
* `LEDGER_MIN`: smallest ledger number you want. Use `1` for a full sync.
* `LEDGER_MAX`: largest ledger number you want. Usually you will want the latest ledger, which is exposed as `core_latest_ledger` by any synced Horizon server, e.g. https://stellar-horizon.satoshipay.io/.
* `CHUNK_SIZE`: number of ledgers each worker processes in one chunk.
* `WORKERS`: number of worker processes to spawn. For best performance this should not exceed the number of CPUs.

## Hardware sizing and timing examples
* 2019-05-19: 23 hours with a `CHUNK_SIZE` of `32768` and 50 workers on a `n1-standard-64` machine on Google Cloud (64 CPUs, 240GB RAM, 2TB SSD)
```
./catchup.sh docker-compose.pubnet.yaml 1 23920640 32768 50 2>&1 | tee catchup.log
```

* 2018-12-20: 24 hours with a `CHUNK_SIZE` of `32768` and 32 workers on a `n1-standard-32` machine on Google Cloud (32 CPUs, 120GB RAM, 1TB SSD)
```
./catchup.sh docker-compose.pubnet.yaml 1 20971520 32768 32 2>&1 | tee catchup.log
```

* ... add your achieved result here by submitting a PR.
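The splitting step from the Idea section can be illustrated with plain shell arithmetic. This is a sketch of the chunking approach, not an excerpt from `catchup.sh`; the ledger numbers are example values:

```
#!/bin/sh
# Enumerate the chunk ranges that a catchup from LEDGER_MIN to
# LEDGER_MAX with the given CHUNK_SIZE would be split into.
LEDGER_MIN=1
LEDGER_MAX=131072
CHUNK_SIZE=32768

CHUNK_START=$LEDGER_MIN
while [ "$CHUNK_START" -le "$LEDGER_MAX" ]; do
  CHUNK_END=$((CHUNK_START + CHUNK_SIZE - 1))
  # The last chunk may be smaller than CHUNK_SIZE
  if [ "$CHUNK_END" -gt "$LEDGER_MAX" ]; then
    CHUNK_END=$LEDGER_MAX
  fi
  echo "chunk: ledgers $CHUNK_START-$CHUNK_END"
  CHUNK_START=$((CHUNK_END + 1))
done
```

Each of these ranges is handed to one of the `WORKERS` catchup processes; the per-chunk databases and history archives are then stitched together.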
## Example run on dedicated Google Cloud machine
```
sudo apt-get update
sudo apt-get install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common \
python-pip
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"
sudo apt-get update
sudo apt-get install -y docker-ce
sudo pip install docker-compose
echo '{"default-address-pools":[{"base":"172.80.0.0/16","size":29}]}' | sudo tee /etc/docker/daemon.json
sudo usermod -aG docker "$USER" # -a appends to the docker group instead of replacing existing groups
sudo reboot
# log in again and check whether docker works
docker ps
```

```
git clone [email protected]:satoshipay/stellar-core-parallel-catchup.git
cd stellar-core-parallel-catchup
./catchup.sh docker-compose.pubnet.yaml 1 20971520 32768 32 2>&1 | tee catchup.log
```

You will get three important pieces of data for Stellar Core:
* SQL database: if you need to move the data to another container or machine, you can dump the database by running:
```
docker exec catchup-result_stellar-core-postgres_1 pg_dump -F d -f catchup-sqldump -j 10 -U postgres -d stellar-core
```

Then copy the `catchup-sqldump` directory to the target container/machine and restore it with `pg_restore`.
* `data-result` directory: contains the `buckets` directory that Stellar Core needs for continuing with the current state in the SQL database.
* `history-result` directory: contains the full history that can be published to help other validator nodes catch up (e.g., to S3, GCS, IPFS, or any other file storage).

Note: make sure the three pieces of data are in a consistent state before starting Stellar Core in SCP mode (e.g., when moving data to another machine).
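A matching restore on the target can look like the following sketch. The container name mirrors the dump example above and will differ on another machine; `--clean --if-exists` assumes the target `stellar-core` database already exists:

```
# Copy the directory-format dump into the target Postgres container
docker cp catchup-sqldump catchup-result_stellar-core-postgres_1:/catchup-sqldump
# Restore with the same parallelism as the dump (-j 10)
docker exec catchup-result_stellar-core-postgres_1 \
  pg_restore -F d -j 10 -U postgres -d stellar-core --clean --if-exists /catchup-sqldump
```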
## Reset
If you need to start from scratch again you can delete all docker-compose projects:
```
for PROJECT in $(docker ps --filter "label=com.docker.compose.project" -q \
    | xargs docker inspect --format '{{index .Config.Labels "com.docker.compose.project"}}' \
    | sort -u \
    | grep catchup-); do
  docker-compose -f docker-compose.pubnet.yaml -p "$PROJECT" down -v
done
docker volume prune
```