{"id":13448895,"url":"https://github.com/tablelandnetwork/go-tableland","last_synced_at":"2025-05-09T02:40:51.542Z","repository":{"id":41354677,"uuid":"431876466","full_name":"tablelandnetwork/go-tableland","owner":"tablelandnetwork","description":"Go implementation of the Tableland database/validator - run your own node, handling on-chain events and serving read-queries","archived":false,"fork":false,"pushed_at":"2024-08-22T16:58:40.000Z","size":27160,"stargazers_count":55,"open_issues_count":35,"forks_count":11,"subscribers_count":7,"default_branch":"main","last_synced_at":"2025-05-06T12:15:33.137Z","etag":null,"topics":["database","golang","sql","sqlite","tableland","web3"],"latest_commit_sha":null,"homepage":"https://tableland.xyz/","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tablelandnetwork.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":".github/CODEOWNERS","security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2021-11-25T14:35:05.000Z","updated_at":"2025-03-20T15:27:07.000Z","dependencies_parsed_at":"2024-01-07T06:00:30.581Z","dependency_job_id":"69846e27-d841-43a6-9164-2a6ee3ee7d2e","html_url":"https://github.com/tablelandnetwork/go-tableland","commit_stats":null,"previous_names":["textileio/go-tableland"],"tags_count":43,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tablelandnetwork%2Fgo-tableland","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tablelandnetwork%2Fgo-tableland/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tablelandnetwork%2Fgo-tableland/releases","
manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tablelandnetwork%2Fgo-tableland/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tablelandnetwork","download_url":"https://codeload.github.com/tablelandnetwork/go-tableland/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253179306,"owners_count":21866709,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["database","golang","sql","sqlite","tableland","web3"],"created_at":"2024-07-31T06:00:23.936Z","updated_at":"2025-05-09T02:40:51.518Z","avatar_url":"https://github.com/tablelandnetwork.png","language":"Go","readme":"# Tableland Validator\n\n[![Review](https://github.com/tablelandnetwork/go-tableland/actions/workflows/review.yml/badge.svg)](https://github.com/tablelandnetwork/go-tableland/actions/workflows/review.yml)\n[![Test](https://github.com/tablelandnetwork/go-tableland/actions/workflows/test.yml/badge.svg)](https://github.com/tablelandnetwork/go-tableland/actions/workflows/test.yml)\n[![Go Reference](https://pkg.go.dev/badge/github.com/textileio/go-tableland.svg)](https://pkg.go.dev/github.com/textileio/go-tableland)\n[![Go Report 
Card](https://goreportcard.com/badge/github.com/textileio/go-tableland)](https://goreportcard.com/report/github.com/textileio/go-tableland)\n[![License](https://img.shields.io/github/license/tablelandnetwork/go-tableland.svg)](./LICENSE)\n[![Release](https://img.shields.io/github/release/tablelandnetwork/go-tableland.svg)](https://github.com/tablelandnetwork/go-tableland/releases/latest)\n[![standard-readme compliant](https://img.shields.io/badge/standard--readme-OK-green.svg)](https://github.com/RichardLitt/standard-readme)\n\n\u003e Go implementation of the Tableland database—run your own node, handling on-chain mutating events and serving read-queries.\n\n## Table of Contents\n\n- [Background](#background)\n  - [What is a validator?](#what-is-a-validator)\n  - [Validator and network relationship](#validator-and-network-relationship)\n  - [Running a validator](#running-a-validator)\n- [Usage](#usage)\n  - [System requirements](#system-requirements)\n  - [Firewall configuration](#firewall-configuration)\n  - [System prerequisites](#system-prerequisites)\n  - [Run the validator](#run-the-validator)\n  - [Docker Compose setup](#docker-compose-setup)\n  - [Backups and other routines](#backups-and-other-routines)\n- [Development](#development)\n  - [Configuration](#configuration)\n- [Contributing](#contributing)\n- [License](#license)\n\n## Background\n\n`go-tableland` is a Go language implementation of a Tableland node, enabling developers and service providers to run nodes on the Tableland network and host databases for web3 users and applications. 
Note that the Tableland protocol is currently in open beta, so node operators have the opportunity to be one of the early network adopters while the responsibilities of the validator will continue to change as the Tableland protocol evolves.\n\n### What is a validator?\n\nValidators are the execution unit/actors of the protocol.\n\nThey have the following responsibilities:\n\n- Listen to on-chain events to materialize Tableland-compliant SQL queries in a database engine (currently, SQLite by default).\n- Serve read-queries (e.g., `SELECT * FROM foo_69_1`) to the external world.\n\n\u003e In the future, validators will have more responsibilities in the network.\n\n### Validator and network relationship\n\nThe following diagram describes a high level interaction between the validator, EVM chains, and the external world:\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/13358940/249310798-a65732b9-a48b-4547-bc4d-71af8bd4e09f.png\" width='80%'/\u003e\n\u003c/p\u003e\n\nTo better understand the usual mechanics of the validator, let’s go through a typical use case where a user mints a table, adds data to the table, and reads from it:\n\n1. The user will mint a table (ERC721) from the Tableland `Registry` smart contract on a supported EVM chain.\n2. The `Registry` contract will emit a `CreateTable` event containing the `CREATE TABLE` statement as extra data.\n3. Validators will detect the new event and execute the `CREATE TABLE` statement.\n4. The user will call the `mutate` method in the `Registry` smart contract, with mutating statements such as `INSERT INTO ...`, `UPDATE ...`, `DELETE FROM ...`, etc.\n5. The `Registry` contract, as a result of that call, will emit a `RunSQL` event that contains the mutating SQL statement as extra data.\n6. 
The validators will detect the new event and execute the mutating query in the corresponding table, assuming the user has the right permissions (e.g., table ownership and/or smart contract defined access controls).\n7. The user can query the `/query?statement=...` REST endpoint of the validator to execute read-queries (e.g., `SELECT * FROM ...`), to see the materialized result of their interaction with the smart contract.\n\n\u003e The description above is optimized to understand the general mechanics of the validator. Minting tables and executing mutating statements also imply more work both at the smart contract and validator levels (e.g., ACL enforcing), which is omitted here for simplicity's sake.\n\nThe validator detects the smart contract events using an EVM node API (e.g., `geth` node), which can be self-hosted or served by providers (e.g., Alchemy, Infura, etc).\n\nIf you're curious about Tableland network growth, eager to contribute, or interested in experimenting, we encourage you to try running a validator. To get started, follow the step-by-step instructions provided below. We appreciate your interest and welcome any questions or feedback you may have during the process; stay tuned for updates and developments in our [Discord](https://tableland.xyz/discord) and [Twitter](https://twitter.com/tableland).\n\nFor projects that want to _use_ the validator API, Tableland [maintains a public gateway](https://docs.tableland.xyz/gateway-api) that can be used to query the network.\n\n### Running a validator\n\nRunning a validator only involves running a single process. 
Since we use SQLite as the default database engine, the database is embedded in the validator process, which has many advantages:\n\n- There’s no separate process for the database.\n- There’s no inter-process communication between the validator and the database.\n- There’s no separate configuration or monitoring needed for the database.\n\nWe provide everything you need to run a validator with a single command using a docker-compose setup. This will automatically build everything from the source code, making it platform-independent since most OSes support docker. The build process is also dockerized, so node operators don’t need to worry about installing compilers or similar.\n\nIf you prefer creating your own setup (e.g., running raw binaries, using systemd, k8s, etc.), we’re also planning to automate versioned Docker images or compiled executables. If there are other setups you're interested in, feel free to let us know or even share your own setup.\n\nThe [Docker Compose setup](#docker-compose-setup) section below describes how to run a validator in more detail, including:\n\n- Folder structure.\n- Configuration files.\n- Where the state of the validator lives.\n- Baked-in observability stack (i.e., Prometheus + Grafana with dashboard).\n- Optional `healthbot` process to have an end-to-end (e2e) healthiness check of the validator.\n\nReviewing this section is _strongly_ recommended but not strictly necessary.\n\n## Usage\n\n### System requirements\n\nCurrently, we recommend running the validator on a machine that has at least:\n\n- 4 vCPUs.\n- 8GiB of RAM.\n- SSD disk with 10GiB of free space.\n- Reliable and fast internet connection.\n- Static IP.\n\nHardware requirements might change with time, but this setup is probably over-provisioned in the current state. 
We’re planning a stress-testing benchmark suite to understand and predict the behavior of the validator under different loads, and to gather more data for future recommended system requirements.\n\n### Firewall configuration\n\nIf you’re behind a firewall, you should open port `:8080` or `:443`, depending on whether you run with TLS certificates. By default, TLS is not required, so `:8080` is expected to be open to the external world.\n\n### System prerequisites\n\nThere are two prerequisites for running a validator:\n\n- Install host-level dependencies.\n- Get EVM node API keys.\n\nTableland has two separate networks:\n\n- `mainnet`: this network syncs mainnet EVM chains (e.g., Ethereum mainnet, Arbitrum mainnet, etc.).\n- `testnet`: this network syncs testnet EVM chains (e.g., Ethereum Sepolia, Arbitrum Sepolia, etc.).\n\nThis guide will focus on running the validator in the `mainnet` network.\n\nWe do this for two reasons:\n\n- The `mainnet` network is the most stable one and is also where we want the greatest number of validators.\n- We can provide concrete file paths related to `mainnet` and avoid being abstract.\n\nWe’ll also explain how to run a validator using Alchemy as a provider for the EVM node API the validator will use. The configuration will be analogous if you use self-hosted nodes or other providers. 
Note that if you _do_ want to support testnets, you can, generally, replace this documentation's `mainnet` reference with `testnet` (e.g., an environment variable with `MAINNET` would be `TESTNET`; `docker/deployed/mainnet` would shift to `docker/deployed/testnet`).\n\n#### Install host-level dependencies\n\nTo run the provided docker-compose setup, you’ll need to have installed:\n\n- `git`: [Installation guide](https://github.com/git-guides/install-git).\n- `docker` with the Compose plugin: [The default Docker engine installation](https://docs.docker.com/engine/install/) already includes the Compose plugin (i.e., `docker compose` command).\n\nNote that there’s no need for a particular `Go` installation since binaries are compiled within a docker container containing the correct Go compiler versions. Despite not being strictly necessary, creating a separate user in the host is usually recommended to run the validator.\n\n#### Create EVM node API keys\n\nThe current setup needs one API key per supported chain. The default setup expects Alchemy keys for the following: Ethereum, Optimism, Arbitrum One, and Polygon; QuickNode for Arbitrum Nova. But, you are free to use a self-hosted node or another provider that supports the targeted chains.\n\nTo get your Alchemy keys, create an [Alchemy](https://alchemy.com) account, log in, and follow these steps:\n\n1. Create one app for each chain using the `+ Create App` button.\n2. You’ll see one row per chain—click the `View Key` button and copy/save the `API KEY`.\n\nTo get your QuickNode Arbitrum Nova key, create a [QuickNode](https://quicknode.com) account, log in, and follow these steps:\n\n1. Create an endpoint.\n2. Select Arbitrum Nova Mainnet.\n3. 
When you finish the wizard, you will have access to your API key.\n\n\u003e Note: For Filecoin, we recommend [Glif.io](https://api.calibration.node.glif.io/rpc/v1) RPC support, which does not require authentication; the `.env` variable's value (shown below) can be left empty.\n\n### Run the validator\n\nNow that you have installed the host-level dependencies and have one wallet per chain and your provider (Alchemy, QuickNode, etc.) API keys, you’re ready to configure the validator and run it.\n\n#### 1. Clone the `go-tableland` repository\n\nNavigate to the folder where you want to clone the repository and run:\n\n```sh\ngit clone https://github.com/tablelandnetwork/go-tableland.git\n```\n\n\u003e Running the `main` branch should always be safe since it’s the exact code that the public validator is running. We recommend this approach since we’re moving quickly with features and improvements, but expect this to soon be better guided by official releases.\n\n#### 2. Configure your secrets in `.env` files\n\nYou must configure each EVM account's private keys and EVM node provider API keys into the validator secrets:\n\n1. Create a `.env_validator` file in the `docker/deployed/mainnet/api` folder—an example is provided with `.env_validator.example`.\n2. 
Add the following to `.env_validator` (as noted, this focuses on mainnet configurations but could be generally replicated for testnet support):\n\n   ```txt\n   VALIDATOR_ALCHEMY_ETHEREUM_MAINNET_API_KEY=\u003cyour ethereum mainnet alchemy key\u003e\n   VALIDATOR_ALCHEMY_OPTIMISM_MAINNET_API_KEY=\u003cyour optimism mainnet alchemy key\u003e\n   VALIDATOR_ALCHEMY_ARBITRUM_MAINNET_API_KEY=\u003cyour arbitrum mainnet alchemy key\u003e\n   VALIDATOR_ALCHEMY_POLYGON_MAINNET_API_KEY=\u003cyour polygon mainnet alchemy key\u003e\n   VALIDATOR_QUICKNODE_ARBITRUM_NOVA_MAINNET_API_KEY=\u003cyour arbitrum nova mainnet quicknode key\u003e\n   VALIDATOR_GLIF_FILECOIN_MAINNET_API_KEY=\n   ```\n\n   \u003e Note: there is also an optional `METRICS_HUB_API_KEY` variable; this can be left empty. It's a service (`cmd/metricshub`) that aggregates metrics like `git summary` and pushes them to centralized infrastructure ([GCP Cloud Run](https://cloud.google.com/run)) managed by the core team. If you'd like to have your validator push metrics to this hub, please reach out to the Tableland team, and we may make it available to you. However, this process will further be decentralized in the future and remove this dependency entirely.\n\n3. Tune the `docker/deployed/mainnet/api/config.json` :\n\n   1. Change the `ExternalURIPrefix` configuration attribute into the DNS (or IP) where your validator will be serving external requests.\n   2. 
In the `Chains` section, only leave the chains you’ll be running; remove any chain entries you do not wish to support.\n\n      \u003cdetails\u003e \n        \u003csummary\u003eReference: example entry\u003c/summary\u003e\n\n      ```json\n      {\n        \"Name\": \"Ethereum Mainnet\",\n        \"ChainID\": 1,\n        \"Registry\": {\n          \"EthEndpoint\": \"wss://eth-mainnet.g.alchemy.com/v2/${VALIDATOR_ALCHEMY_ETHEREUM_MAINNET_API_KEY}\",\n          \"ContractAddress\": \"0x012969f7e3439a9B04025b5a049EB9BAD82A8C12\"\n        },\n        \"EventFeed\": {\n          \"ChainAPIBackoff\": \"15s\",\n          \"NewBlockPollFreq\": \"10s\",\n          \"MinBlockDepth\": 1,\n          \"PersistEvents\": true\n        },\n        \"EventProcessor\": {\n          \"BlockFailedExecutionBackoff\": \"10s\",\n          \"DedupExecutedTxns\": true,\n          \"WebhookURL\": \"https://discord.com/api/webhooks/${VALIDATOR_DISCORD_WEBHOOK_ID}/${VALIDATOR_DISCORD_WEBHOOK_TOKEN}\"\n        },\n        \"HashCalculationStep\": 150\n      }\n      ```\n\n      \u003c/details\u003e\n\n4. Create a `.env_grafana` file in the `docker/deployed/mainnet/grafana` folder—an example is provided with `.env_grafana.example`.\n5. Add the following to `.env_grafana`:\n\n```txt\nGF_SECURITY_ADMIN_USER=\u003cusername you'd like to use to log into Grafana\u003e\nGF_SECURITY_ADMIN_PASSWORD=\u003cpassword of the user\u003e\n```\n\n\u003e Note: the `GF_SERVER_ROOT_URL` variable is optional and can be left empty. By default, Grafana is hosted locally at `http://localhost:3000`.\n\nThat’s it...your validator is now configured!\n\nIt's worthwhile to review the `config.json` file to see how the environment variables configured in the `.env` files inject these secrets into the validator configuration. Also, note how supporting more chains only requires adding an extra entry in the `Chains` section, so it's straightforward to add support for any of the supported `testnets` of each `mainnet` chain. 
Note that adding a _new_ `mainnet` chain that's not yet supported by the network is not possible, as this requires the core Tableland protocol to separately deploy a `Registry` smart contract in order to enable new chain support. This is performed on a case-by-case basis, so please reach out to the Tableland team if you'd like support for a new `mainnet` chain.\n\n#### 3. Run the validator\n\nTo run the validator, move to the `docker` folder and run the following:\n\n```sh\nmake mainnet-up\n```\n\nSome general comments and tips:\n\n- The first time you run this, it can take some time since you’ll have a cold cache regarding images and dependencies in Docker; subsequent runs will be quite fast.\n- You can inspect the general health of containers with `docker ps`.\n- You can tail the logs with `docker logs docker-api-1 -f`.\n- You can tear down the stack with `make mainnet-down`.\n\n\u003e The default docker-compose setup has a baked-in observability substack with Prometheus and Grafana. You can learn more about this in the next section.\n\nWhile the validator is syncing, you might see logs generated rather quickly, and the SQLite database at `docker/deployed/mainnet/api/database.db` should start to grow in size.\n\n### Docker Compose setup\n\nThe docker-compose setup can feel a bit magical, so in this section, we’ll explain the setup's folder structure and important considerations. 
Remember that you don’t need to understand this section to run a validator, but knowing how things work is highly recommended.\n\n#### Architecture and port bindings\n\nWhen you run `make mainnet-up`, you’re running the following stack:\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/13358940/249326369-b94c12b9-6550-49c9-92b3-5e662c8bcd72.png\" width='80%'/\u003e\n\u003c/p\u003e\n\nIf you’re running the validator, you’ll see these four containers running with `docker ps`.\n\nThere are two main port-binding groups:\n\n- `:8080` and `:443` to the `api` container (the validator), depending on whether you have configured TLS in the validator.\n- `:3000` to the `grafana` container to access the Grafana dashboard. Remember that if you want to access Grafana from the external world, you’ll have to configure your firewall.\n\nRegarding the containers:\n\n- `api` is the container running the validator.\n- `healthbot` is an optional container running an e2e daemon that checks the healthiness of the full write-query transaction and event execution. More about this in the [Healthbot section](#healthbot-optional).\n- `grafana` and `prometheus` are part of the observability stack, allowing a fully-featured Grafana dashboard that provides useful live information about the validator. 
There's more information about this in the [Observability section](#observability-stack).\n\n#### Folder structure\n\nThe `docker/deployed/mainnet` folder contains one folder per running process:\n\n- `api` folder: contains all the relevant secrets, configuration, and state of the validator.\n  - `config.json` file: the full configuration file of the validator.\n  - `.env_validator` file: contains secrets that are injected into the `config.json` file.\n  - `database.db*` files: when you run the validator, you’ll see these files, which are the SQLite database of the validator (running in [WAL](https://www.sqlite.org/wal.html) mode).\n- `grafana` and `prometheus` folders: contain any state from these daemons. For example, Grafana can include alerts or settings customizations, and Prometheus has the time-series database, so whenever you reset the container, it will keep historical data.\n- `healthbot` folder: contains secrets and configuration for the healthbot.\n\nFrom an operational point of view, you usually don’t have to touch these folders apart from `api/config.json` or `api/.env_validator` if you want to change something about the validator configuration or secrets. The Prometheus setup has a default 15-day retention period for time-series data, so the database size should be automatically bounded.\n\n#### Configuration files\n\nThe validator configuration is done via a JSON file located at `deployed/mainnet/api/config.json`.\n\nThis file contains general configuration, such as desired listening ports, gateway configuration, and log level, as well as chain-specific configuration, including name, chain ID, contract address, wallet private keys, and EVM node API endpoints.\n\nThe provided configurations in each `deployed/\u003cenvironment\u003e` already have everything needed for the environment and other recommended values. 
The environment variable expansion parts of the `config.json` file, such as secrets and other attributes in the `.env_validator` file, were explained in the [secret configuration section](#2-configure-your-secrets-in-env-files) above. For example, the `VALIDATOR_ALCHEMY_ETHEREUM_MAINNET_API_KEY` variable configured in `.env_validator` expands the `${VALIDATOR_ALCHEMY_ETHEREUM_MAINNET_API_KEY}` placeholder present in the `config.json` file. If you want to use a self-hosted Ethereum mainnet node API or another provider, you can edit the `EthEndpoint` attribute in the `config.json` file. This same logic applies to every possible configuration in the validator.\n\n#### Observability stack\n\nAs mentioned earlier, the default docker-compose setup provides a fully configured observability stack by running Prometheus and Grafana.\n\nThis setup configures the scrape endpoints in Prometheus to pull metrics from the validator, as well as the data sources and dashboard for Grafana. These automatically bound configuration files are in the `docker/observability/(grafana|prometheus)` folders. They are not part of the state of the processes. This is intentional so that, for example, the dashboard is part of the `go-tableland` repository, and you’ll get automatic dashboard upgrades while it is being improved or extended.\n\nAfter you spin up the validator, you can go to `http://localhost:3000` and access the Grafana setup. 
Recall that you configured the credentials in the `.env_grafana` file in `docker/deployed/mainnet/grafana`.\n\nIf you browse the existing dashboards, you should see a _Validator_ dashboard that looks like the following, which aggregates all metrics that the validator generates:\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/13358940/249328422-3d727309-42b8-4ffc-9f2f-bcfed6b5d398.png\" width='80%'/\u003e\n\u003c/p\u003e\n\n#### Healthbot (optional)\n\nThe `healthbot` daemon is an optional feature of the docker-compose stack and is _only_ needed if you support a testnet network; it's disabled by default.\n\nThe main goal of `healthbot` is to test, end to end, whether the validator is running correctly:\n\n- For every configured chain, it executes a write statement to the Tableland smart contract to increase a counter value in a pre-minted table that is owned by the validator.\n- It waits to see whether the increased counter was materialized in the target table, thus signaling that:\n  - The transaction with the `UPDATE` statement was correctly sent to the chain.\n  - The transaction was correctly mined in the target blockchain.\n  - The event for that `UPDATE` was detected and processed by the validator.\n  - A `SELECT` statement reading that table should read the increased counter in the target table.\n\nIn short, it tests most of the processing healthiness of the validator. 
For each of the target chains, you should mint a table with the following statement:\n\n```sql\nCREATE TABLE healthbot_{chainID} (counter INTEGER);\n```\n\nThis would result in having four tables—one per chain:\n\n- `healthbot_11155111_{tableID}` (Ethereum Sepolia)\n- `healthbot_11155420_{tableID}` (Optimism Sepolia)\n- `healthbot_421614_{tableID}` (Arbitrum Sepolia)\n- `healthbot_314159_{tableID}` (Filecoin Calibration)\n\nYou should create a file `.env_healthbot` in the `docker/deployed/testnet/healthbot` folder with the following content (an example is provided with `.env_healthbot.example`):\n\n```txt\nHEALTHBOT_ETHEREUM_SEPOLIA_TABLE=healthbot_11155111_{tableID}\nHEALTHBOT_OPTIMISM_SEPOLIA_TABLE=healthbot_11155420_{tableID}\nHEALTHBOT_ARBITRUM_SEPOLIA_TABLE=healthbot_421614_{tableID}\nHEALTHBOT_FILECOIN_CALIBRATION_TABLE=healthbot_314159_{tableID}\n```\n\nFinally, edit the `Target` attribute in the `docker/deployed/testnet/healthbot/config.json` file with the public DNS where your validator serves requests to the external world. This is the endpoint where the healthbot will make its healthiness probes.\n\nSince running the `healthbot` requires custom tables to be minted, it’s disabled by default. To enable it, run `make testnet-up` with the `HEALTHBOT_ENABLED=true` environment variable set:\n\n```sh\nHEALTHBOT_ENABLED=true make testnet-up\n```\n\nAfter a few minutes, you should see the `HealthBot -e2e check` section of the Grafana dashboard populated:\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/13358940/249330376-53afd85e-693b-47d4-b877-9463a10af135.png\" width='80%'/\u003e\n\u003c/p\u003e\n\n#### Pruning docker images (optional)\n\nRemoving old docker images from time to time may be beneficial to avoid unnecessary disk usage. You can set up a `cron` rule to do that automatically. For example, you could do the following:\n\n1. Run `crontab -e`.\n2. 
Add the rule: `0 0 * * FRI /usr/bin/docker system prune --volumes -f  \u003e\u003e /home/validator/cronrun 2\u003e\u00261`\n\n### Backups and other routines\n\nAll validators are equipped with a backup scheduler: a background routine that executes a backup of the SQLite database file at a configurable, regular frequency. Besides the main backup of the database, the `Backuper` process executes a `VACUUM` process in the backup file and compresses it with `zstd`.\n\n#### How the backup process works\n\nThe backup process, called `Backuper`, takes a backup of the SQLite database file and stores it in a local directory relative to where the database is stored.\n\nThe process uses the [SQLite Backup API](https://www.sqlite.org/c3ref/backup_finish.html) provided by [mattn/go-sqlite3](https://pkg.go.dev/github.com/mattn/go-sqlite3#SQLiteBackup). It is a full backup in a single step. Right now, the database is small enough not to worry about locking and how long it takes, but an incremental backup approach may be needed as the database grows in the future.\n\n#### How the scheduler works\n\nThe scheduler ticks at a regular interval defined by the `Frequency` config. It is important to mention that the time it runs is relative to the epoch time. That means that, as the validator becomes operational and healthy after a deployment, it will start a backup routine at the next timestamp that is a multiple of `Frequency` relative to the epoch. This keeps backup files evenly distributed in time.\n\n#### Vacuum\n\nAfter the backup is finished, it executes the `VACUUM` SQL statement in the backup database to remove any unused rows and reduce the database file. 
This process may take a while, but that's expected since there shouldn't be any other connections to the backup database at this point.\n\n#### Compression\n\nAfter **vacuum**, we shrink the database even further by compressing it using the [zstd](http://facebook.github.io/zstd/) algorithm implemented by the [compress](https://github.com/klauspost/compress) library.\n\n#### Pruning\n\nWe don't keep all backup files around—at the end, we remove any files exceeding the backup's `KeepFiles` config, located in `cmd/api/config.go`. The default value is `5`.\n\n#### Filename convention\n\nThe backup files follow the pattern `tbl_backup_{{TIMESTAMP}}.db.zst`. For example, a file should resemble the following: `tbl_backup_2022-08-25T20:00:00Z.db.zst`.\n\n#### Decompressing the file\n\nIf you're on Linux or Mac, you should have `unzstd` installed out of the box. For example, run `unzstd tbl_backup_2022-08-25T20:00:00Z.db.zst` (replace with your file name) to decompress the compressed database file.\n\n#### Metrics\n\nWe collect the following metrics from the process through **logs**:\n\n```go\nTimestamp              time.Time\nElapsedTime            time.Duration\nVacuumElapsedTime      time.Duration\nCompressionElapsedTime time.Duration\nSize                   int64\nSizeAfterVacuum        int64\nSizeAfterCompression   int64\n```\n\nAdditionally, we collect the metric `tableland.backup.last_execution` through **Open Telemetry** and **Prometheus**.\n\n#### Configs\n\nThe backup configuration is located in the `docker/deployed/mainnet/api/config.json` file. 
The following is the default configuration:\n\n```json\n\"Backup\" : {\n  \"Enabled\": true,       // enables the backup scheduler to execute backups\n  \"Dir\": \"backups\",      // where backup files are stored relative to db\n  \"Frequency\": 240,      // backup frequency in minutes\n  \"EnableVacuum\": true,\n  \"EnableCompression\": true,\n  \"Pruning\" : {\n    \"Enabled\": true,  // enables pruning\n    \"KeepFiles\": 5    // pruning keeps at most `KeepFiles` backup files\n  }\n}\n```\n\n## Development\n\nGet started by following the validator setup steps described above. From there, you can make changes to the codebase and run the validator locally. For a validator stack against a local Hardhat network, you can run the following from the `docker` folder:\n\n- `make local-up`\n- `make local-down`\n\nFor a validator stack against deployed staging environments, you can run:\n\n- `make staging-up`\n- `make staging-down`\n\n### Configuration\n\nNote that for deployed environments, there are two relevant configuration files in each folder `docker/deployed/\u003cenvironment\u003e`:\n\n- `.env_validator`: allows you to configure environments to fill secrets for the validator, plus, expand variables present in the config file (see the `.env_validator.example` example file).\n- `config.json`: the configuration file for the validator.\n\nBesides that, you may want to configure Grafana's `admin_user` and `admin_password`. To do that, configure the `.env_grafana` file with the values of the expected keys shown in `.env_grafana.example`. This all should have been set up already but is worth noting.\n\n## Contributing\n\nPRs accepted. 
Feel free to get in touch by:\n\n- Opening an issue.\n- Joining our [Discord server](https://tableland.xyz/discord).\n\nSmall note: If editing the README, please conform to the\n[standard-readme](https://github.com/RichardLitt/standard-readme) specification.\n\n## License\n\nMIT AND Apache-2.0, © 2021-2024 Tableland Network Contributors\n","funding_links":[],"categories":["Relational Databases","Go"],"sub_categories":["Blockchain"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftablelandnetwork%2Fgo-tableland","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftablelandnetwork%2Fgo-tableland","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftablelandnetwork%2Fgo-tableland/lists"}