https://github.com/migalabs/eth-pokhar
Tool to identify validators <> entities in the Ethereum consensus layer
- Host: GitHub
- URL: https://github.com/migalabs/eth-pokhar
- Owner: migalabs
- Created: 2023-12-28T16:05:15.000Z (11 months ago)
- Default Branch: main
- Last Pushed: 2024-02-23T16:24:32.000Z (9 months ago)
- Last Synced: 2024-02-29T09:42:21.122Z (9 months ago)
- Language: Go
- Size: 446 KB
- Stars: 1
- Watchers: 2
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
# eth-pokhar
Eth-pokhar is a Go tool that identifies the pool or entity operating each validator in the Ethereum beacon chain.
Identifying staking entities is tricky because, in most cases, this information is not available on-chain. Pools like Lido and Rocketpool are an exception: since they create their validators through smart contracts, their data is on-chain and can be easily identified.
For the remaining entities, other methods can be used, such as observing patterns in depositor addresses (like the ones found in the [eth-deposits](https://github.com/alrevuelta/eth-deposits) repository) or using off-chain data from contacts and other data sources. To create a validator, an address must deposit 32 ETH to the beacon chain deposit contract, and in most cases an entity reuses the same depositor address across many validators. By knowing the entity behind a few of these addresses, one can extrapolate that information, identify all of the validators created by those addresses, and thus identify the entities.
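The extrapolation idea above can be sketched as a simple mapping step (an illustrative sketch, not the tool's actual code; all names and addresses here are hypothetical): validators inherit the entity tag of a depositor address whose operator is already known.

```go
package main

import "fmt"

// tagByDepositor is a hypothetical sketch of the extrapolation described
// above: given a small set of depositor addresses with a known entity,
// every validator funded by one of those addresses inherits that tag.
func tagByDepositor(knownDepositors, validatorDepositor map[string]string) map[string]string {
	tags := make(map[string]string)
	for pubkey, depositor := range validatorDepositor {
		if entity, ok := knownDepositors[depositor]; ok {
			tags[pubkey] = entity
		}
	}
	return tags
}

func main() {
	// "aaaa" is known to belong to a (hypothetical) pool.
	known := map[string]string{"aaaa": "example_pool"}
	// pk1 was funded by "aaaa", pk2 by an unknown depositor.
	deposits := map[string]string{"pk1": "aaaa", "pk2": "bbbb"}
	fmt.Println(tagByDepositor(known, deposits)) // map[pk1:example_pool]
}
```

Validators funded by unknown depositors stay untagged at this step; the tool's later stages (whale tagging, manual inserts) handle those.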
This tool is used for tagging validators in [ethseer.io](https://ethseer.io/?network=mainnet).
## Pre-requisites
To use the tool, the following requirements must be met:
- An Alchemy API key (the free tier is enough). See [here](https://www.alchemy.com/pricing)
- Access to an Ethereum EL node (you can also use Alchemy for this)

Expect this tool to make the following number of requests to the Ethereum EL node on the first run:
- ~1.5m [eth_getTransactionReceipt](https://docs.alchemy.com/reference/eth-gettransactionreceipt) requests
- ~1.5m [eth_call](https://docs.alchemy.com/reference/eth-call) requests

And the following number of requests to the Alchemy API on each run:
- ~300k [alchemy_getAssetTransfers](https://docs.alchemy.com/reference/alchemy-getassettransfers) requests
The size of the fully synced database is around 3GB.
## Available commands
### `beacon_depositors_transactions`
Fetches the transactions of the depositors of the beaconchain contract.
Available options (configurable in the `.env` file):
```
OPTIONS:
--el-endpoint value Execution node endpoint (default: http://localhost:8545) [$EL_ENDPOINT]
--db-url value Database where to store transactions (default: postgres://user:password@localhost:5432/dbName) [$DB_URL]
--log-level value Log level: debug, warn, info, error (default: info) [$LOG_LEVEL]
--workers-num value Number of workers to process API requests (default: 10) [$WORKER_NUM]
--alchemy-url value Alchemy url (default: https://eth-mainnet.g.alchemy.com/v2/KEY) [$ALCHEMY_URL]
--only-deposits Only fetch deposits. If not set, it will fetch all new depositor transactions which can take up to 20 hours (default: false)
--help, -h show help
```

### `identify`
Identify the pool in which validators are participating or the entity who operates the validators.
Available options (configurable in the `.env` file):
```
OPTIONS:
--el-endpoint value Execution node endpoint (default: http://localhost:8545) [$EL_ENDPOINT]
--db-url value Database where to store transactions (default: postgres://user:password@localhost:5432/dbName) [$DB_URL]
--log-level value Log level: debug, warn, info, error (default: info) [$LOG_LEVEL]
--alchemy-url value Alchemy url (default: https://eth-mainnet.g.alchemy.com/v2/KEY) [$ALCHEMY_URL]
--workers-num value Number of workers to process API requests (default: 10) [$WORKER_NUM]
--whale-threshold value Minimum number of validators to be considered a whale (default: 100) [$WHALE_THRESHOLD]
--recreate-table Recreate the t_identified_validators table, meant to be used when one of the methodologies of identification changes (default: false)
--help, -h show help
```

## Running with docker-compose (recommended)
To run the tool with docker, you can use the following commands:
First, create a `.env` file on the root folder. You can use the `.env.example` file as a template.
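For illustration, a minimal `.env` might look like this (the variable names come from the OPTIONS listings above; the values are placeholders, and `.env.example` remains the authoritative template):

```bash
# Illustrative .env — replace every value with your own
EL_ENDPOINT=http://localhost:8545
DB_URL=postgres://user:password@localhost:5432/dbName
LOG_LEVEL=info
WORKER_NUM=10
ALCHEMY_URL=https://eth-mainnet.g.alchemy.com/v2/KEY
WHALE_THRESHOLD=100
ONLY_DEPOSITS=false
```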
Then, run the following command to build the tool:
```bash
docker-compose build
```

Finally, run the tool with the following command:
```bash
docker-compose up -d
```

In case you don't want to update the depositor transactions (which can take up to 20 hours), you can set `ONLY_DEPOSITS=true` in the `.env` file and run the tool normally, or use the following command:
```bash
ONLY_DEPOSITS=true docker-compose up -d
```

This will set the [`--only-deposits`](#beacon_depositors_transactions) flag to true.
## Output
The tool will create a database with the following tables:
### `t_beacon_deposits`
This table stores the deposits made to the beaconchain contract. It has the following columns:
- `f_block_num`: The block number in which the deposit was made.
- `f_depositor`: The address of the depositor.
- `f_tx_hash`: The transaction hash of the deposit.
- `f_validator_pubkey`: The public key of the validator.

### `t_beacon_depositors_transactions`
This table stores the incoming/outgoing transactions of the depositors of the beaconchain contract. It has the following columns:
- `f_block_num`: The block number in which the transaction was made.
- `f_value`: The value of the transaction.
- `f_from`: The address from which the transaction was made.
- `f_to`: The address to which the transaction was made.
- `f_tx_hash`: The transaction hash of the transaction.
- `f_depositor`: The address of the depositor to which the transaction is related.

### `t_depositors_insert`
This table stores the depositors that are used to identify the pool in which the validators are participating. See [Utilizing custom off-chain data](#utilizing-custom-off-chain-data) for more information. It has the following columns:
- `f_depositor`: The address of the depositor.
- `f_pool_name`: The name of the pool in which the validators are participating.

### `t_validators_insert`
This table stores the validators that are used to identify the pool in which the validators are participating. See [Utilizing custom off-chain data](#utilizing-custom-off-chain-data) for more information. It has the following columns:
- `f_validator_pubkey`: The public key of the validator.
- `f_pool_name`: The name of the pool in which the validators are participating.

### `t_lido`
This table stores the validators that are participating in the Lido pool. See [Lido operators](https://operatorportal.lido.fi/) for more information. It has the following columns:
- `f_validator_pubkey`: The public key of the validator.
- `f_operator`: The name of the operator of the validator.
- `f_operator_index`: The index of the operator in the Lido pool.

### `t_rocketpool`
This table stores the validators that are participating in the Rocketpool pool. It has the following columns:
- `f_validator_pubkey`: The public key of the validator.
### `t_identified_validators` (End result)
This table stores the validators together with the pool/entity that operates them. Unidentified validators that are not whales (see [Whale tagging](#whale-tagging)) will have a `f_pool_name` value of `solo_stakers`. It has the following columns:
- `f_validator_pubkey`: The public key of the validator.
- `f_pool_name`: The name of the pool in which the validators are participating.

## Utilizing custom off-chain data
As mentioned before, the tool can be used to identify validators by using off-chain data. For this purpose, two tables are created in the database on the first run: `t_depositors_insert` and `t_validators_insert`.
**Important note**: addresses must be all lowercase and without the `0x` prefix.
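A small helper sketch for the formatting rule above (illustrative, not part of the tool): normalize any address to lowercase with the `0x` prefix stripped before inserting it.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeAddress is a hypothetical helper that converts an address to
// the format these tables expect: all lowercase, without the "0x" prefix.
func normalizeAddress(addr string) string {
	addr = strings.ToLower(addr)
	return strings.TrimPrefix(addr, "0x")
}

func main() {
	fmt.Println(normalizeAddress("0xAbC123DeF")) // abc123def
}
```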
### `t_depositors_insert`
This table has the columns `f_depositor` and `f_pool_name`. The `identify` command will use this table to identify the pool in which the validators are participating. The `f_depositor` column is the address of the depositor and the `f_pool_name` is the name of the pool in which the validators are participating. All validators that have the same depositor address will be tagged with the `f_pool_name` value.
### `t_validators_insert`
This table has the columns `f_validator_pubkey` and `f_pool_name`. The `identify` command will use this table to identify the pool in which the validators are participating. The `f_validator_pubkey` column is the public key of the validator and the `f_pool_name` is the name of the pool in which the validators are participating. These values will be used to tag the validators and will override any other tag that the validator might have been given.
## Whale tagging
We use the following definition of a whale: an unidentified depositor responsible for at least [`WHALE_THRESHOLD`](https://github.com/migalabs/eth-pokhar?tab=readme-ov-file#identify) validators. The tool tags each whale as `whale_0x....` (the first 4 characters of the depositor address).
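The rule above can be sketched as follows (an illustrative sketch under stated assumptions, not the tool's actual code; per the note in the previous section, the depositor address is assumed to be lowercase without the `0x` prefix):

```go
package main

import "fmt"

// whaleTag is a hypothetical sketch of the whale rule: an unidentified
// depositor with at least whaleThreshold validators is tagged
// "whale_0x" plus the first 4 characters of its address; otherwise the
// validators fall back to "solo_stakers".
func whaleTag(depositor string, validatorCount, whaleThreshold int) (string, bool) {
	if validatorCount < whaleThreshold {
		return "solo_stakers", false
	}
	return "whale_0x" + depositor[:4], true
}

func main() {
	tag, isWhale := whaleTag("1a2bffee", 150, 100)
	fmt.Println(tag, isWhale) // whale_0x1a2b true
}
```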
## Identification priority
Since the end table `t_identified_validators` is the result of the identification process, validators' pool/entity will be tagged in the following order (if a validator is already tagged, the next step overrides the previous tag, so the last tag applied is the one that is stored):
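The override behaviour described above can be sketched like this (an illustrative sketch with hypothetical pass names; the actual pass order is the tool's): each identification pass writes its tag over whatever an earlier pass assigned, so the last matching pass wins.

```go
package main

import "fmt"

// applyPasses runs identification passes in order; a later pass
// overwrites the tag assigned by an earlier one for the same validator.
func applyPasses(passes []map[string]string) map[string]string {
	final := make(map[string]string)
	for _, pass := range passes {
		for pubkey, pool := range pass {
			final[pubkey] = pool // later passes override earlier tags
		}
	}
	return final
}

func main() {
	passes := []map[string]string{
		{"pk1": "whale_0x1a2b"}, // e.g. whale tagging
		{"pk1": "example_pool"}, // e.g. a t_validators_insert override
	}
	fmt.Println(applyPasses(passes)["pk1"]) // example_pool
}
```

This matches the documented behaviour of `t_validators_insert`, whose tags override any other tag a validator might have been given.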
## Database migrations
In case you encounter any issue with the database, you can force the database version using the golang-migrate command line. Please refer [here](https://github.com/golang-migrate/migrate) for more information.
More specifically, one could clean the migrations by forcing the version with
`migrate -path / -database "postgresql://username:secretkey@localhost:5432/database_name?sslmode=disable" force `
If specific upgrades or downgrades need to be done manually, one could do this with
`migrate -path database/migration/ -database "postgresql://username:secretkey@localhost:5432/database_name?sslmode=disable" -verbose up`

## Benchmarks
Utilizing Alchemy as EL node and API, with a local database, these are the benchmarks for the tool:
- `beacon_depositors_transactions`:
- Fetching deposits on the first run: 1h 16m
- Fetching depositors transactions: 15h 22m
- Total time: 16h 38m
- `identify`:
- Fetching Rocketpool validators on the first run: 38m
- Fetching Lido validators on the first run: 4h 13m
- Total time: 4h 51m
- Total time for the first run: 21h 29m

If running the tool on a weekly basis, expect it to take around 16h, since depositor transactions are fetched on every run unless the `--only-deposits` flag is set, in which case the run takes around 10m.
The size of the fully synced database is around 3GB.
## Maintainers
@santi1234567