{"id":25451466,"url":"https://github.com/usagi-coffee/pg_chainsync","last_synced_at":"2025-10-14T06:02:54.972Z","repository":{"id":155244726,"uuid":"621530677","full_name":"usagi-coffee/pg_chainsync","owner":"usagi-coffee","description":"Access blockchain inside PostgreSQL","archived":false,"fork":false,"pushed_at":"2025-06-26T23:27:47.000Z","size":1053,"stargazers_count":8,"open_issues_count":0,"forks_count":0,"subscribers_count":2,"default_branch":"master","last_synced_at":"2025-06-27T00:40:54.493Z","etag":null,"topics":["blockchain","database","ethereum","pgrx","postgres","postgres-extension","postgresql","postgresql-extension","rust","sql"],"latest_commit_sha":null,"homepage":"","language":"Rust","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/usagi-coffee.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2023-03-30T21:14:39.000Z","updated_at":"2025-06-26T23:27:50.000Z","dependencies_parsed_at":"2024-03-12T22:26:49.174Z","dependency_job_id":"7ffe028a-f567-4d4b-b772-38928c978c56","html_url":"https://github.com/usagi-coffee/pg_chainsync","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/usagi-coffee/pg_chainsync","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/usagi-coffee%2Fpg_chainsync","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/usagi-coffee%2Fpg_chainsync/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/usagi-coffee%2Fpg_chainsync/releases","manifests_url":"http
s://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/usagi-coffee%2Fpg_chainsync/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/usagi-coffee","download_url":"https://codeload.github.com/usagi-coffee/pg_chainsync/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/usagi-coffee%2Fpg_chainsync/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":279017959,"owners_count":26086237,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-14T02:00:06.444Z","response_time":60,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["blockchain","database","ethereum","pgrx","postgres","postgres-extension","postgresql","postgresql-extension","rust","sql"],"created_at":"2025-02-17T22:53:41.074Z","updated_at":"2025-10-14T06:02:54.966Z","avatar_url":"https://github.com/usagi-coffee.png","language":"Rust","readme":"# pg_chainsync: access blockchain inside PostgreSQL\n\n\u003e Proof of Concept - expect bugs and breaking changes.\n\npg_chainsync adds the ability to access blockchain blocks, events and more directly inside your PostgreSQL instance. 
The extension does not enforce any custom schema for your tables and lets you use custom handlers that you adjust for your specific use case.\n\nThe extension is built with [pgrx](https://github.com/tcdi/pgrx).\n\n## Usage\n\n```sql\nCREATE EXTENSION pg_chainsync;\n```\n\n### Worker lifecycle\n\n```sql\n-- Restart the worker on demand\nSELECT chainsync.restart();\n\n-- Stop the worker\nSELECT chainsync.stop();\n```\n\n### Watching new blocks\n\n\u003e This scenario assumes a blocks table exists with number and hash columns\n\n```sql\n-- This is your custom handler that inserts new blocks into your table\nCREATE FUNCTION custom_block_handler(block chainsync.EvmBlock, job JSONB) RETURNS your_blocks\nAS $$\nINSERT INTO your_blocks (number, hash)\nVALUES (block.number, block.hash)\nRETURNING *\n$$\nLANGUAGE SQL;\n\n-- Register a new job that will watch new blocks\nSELECT chainsync.register(\n  'simple-blocks',\n  '{\n    \"ws\": \"wss://provider-url\",\n    \"evm\": {\n      \"block_handler\": \"custom_block_handler\"\n    }\n  }'::JSONB);\n```\n\nFor optimal performance, your handler function should meet the conditions to be [inlined](https://wiki.postgresql.org/wiki/Inlining_of_SQL_functions).\n\nHere is the complete log output; for testing, the number of fetched blocks was limited to display the full lifecycle.\n\n![example_output](./extra/usage1.png)\n\n### Watching new events\n\n```sql\n-- This is your custom handler that inserts events into your table\nCREATE FUNCTION custom_log_handler(log chainsync.EvmLog, job JSONB) RETURNS your_logs\nAS $$\nINSERT INTO your_logs (address, data) -- Inserting into your custom table\nVALUES (log.address, log.data)\nRETURNING *\n$$\nLANGUAGE SQL;\n\nSELECT chainsync.register(\n  'custom-events',\n  '{\n    \"ws\": \"ws://provider-url\",\n    \"evm\": {\n      \"log_handler\": \"custom_log_handler\",\n      \"address\": \"0x....\",\n      \"event\": \"Transfer(address,address,uint256)\"\n    }\n  }'::JSONB\n);\n\n-- 
Optional: Restart worker (or entire database)\nSELECT chainsync.restart();\n```\n\n### Oneshot tasks\n\nA oneshot task is a type of job designed to run only once, or to be triggered manually.\n\nRunning this query adds a task that fetches all transfer events for a specific contract address, starting from block 12345 and fetching 10000 blocks per call.\n\n\u003e Hint: Most providers limit the number of events or the range of blocks returned by the getLogs method, so a single large request may simply fail. In that case you can use the blocktick option, which splits fetching into multiple calls; blocktick is the range of blocks per call. This does not apply to watching events, because watching starts from the latest block.\n\n```sql\nSELECT chainsync.register(\n  'oneshot-task',\n  '{\n    \"ws\": \"ws://provider-url\",\n    \"oneshot\": true,\n    \"evm\": {\n      \"log_handler\": \"custom_log_handler\",\n      \"address\": \"0x....\",\n      \"event\": \"Transfer(address,address,uint256)\",\n      \"from_block\": 12345,\n      \"blocktick\": 10000\n    }\n  }'::JSONB\n);\n```\n\n#### Cron tasks\n\nCron tasks are supported; simply add a `cron` key to your configuration JSON.\n\n\u003e Hint: the cron expression should have 6 fields because it supports seconds resolution, e.g. `0 * * * * *` will run every minute\n\n```sql\nSELECT chainsync.register(\n  'transfers-every-minute',\n  '{\n    \"ws\": \"wss://provider-url\",\n    \"cron\": \"0 * * * * *\",\n    \"evm\": {\n      \"log_handler\": \"transfer_handler\",\n      \"address\": \"0x....\",\n      \"event\": \"Transfer(address,address,uint256)\",\n      \"from_block\": 0\n    }\n  }'::JSONB\n);\n```\n\n#### Preloaded tasks\n\nSome tasks need to run when the database starts; for that, set the `preload` option, and the created task will run whenever the extension or the database (re)starts.\n\n```sql\nSELECT chainsync.register(\n  'transfers-on-restart',\n  '{\n    \"ws\": \"wss://provider-url\",\n    \"preload\": true,\n    \"evm\": {\n      
\"log_handler\": \"transfer_handler\",\n      \"address\": \"0x....\",\n      \"event\": \"Transfer(address,address,uint256)\",\n      \"from_block\": 0\n    }\n  }'::JSONB\n);\n```\n\n#### Handle blocks before events\n\n`await_block` is a feature that lets you fetch and handle an event's block before handling the event itself. This is helpful when you want to e.g. join the block inside your event handler; it ensures a block is always available for the specific event when your event handler is called.\n\nYou can optionally skip block fetching and handling by specifying the `block_skip_lookup` property, which is the name of a function that takes `(block BIGINT, job JSONB)` and returns any value; if a value is returned, handling of that block is skipped.\n\n```sql\n-- Look for the block in your schema and return e.g. the block number\nCREATE FUNCTION find_block(block BIGINT, job JSONB) RETURNS BIGINT\nAS $$\nSELECT block_column FROM your_blocks\nWHERE chain_column = job-\u003e\u003e'your_custom_property' AND block_column = block\nLIMIT 1\n$$ LANGUAGE SQL;\n\nSELECT chainsync.register(\n  'ensure-blocks',\n  '{\n    \"ws\": \"wss://provider-url\",\n    \"evm\": {\n      \"log_handler\": \"transfer_handler\",\n      \"address\": \"0x....\",\n      \"event\": \"Transfer(address,address,uint256)\",\n\n      \"await_block\": true,\n      \"block_skip_lookup\": \"find_block\",\n      \"block_handler\": \"insert_block\"\n    },\n    \"your_custom_property\": 31337\n  }'::JSONB\n);\n```\n\n## Installation\n\n```bash\n# Install pgrx\ncargo install --locked cargo-pgrx\n\n# Build the extension\ncargo build --release\n\n# The packaging process should create pg_chainsync-pg.. under target/release\ncargo pgrx package\n\n# NOTICE: your paths may be different because of pg_config... 
adjust them according to your host/target machine\ncp target/release/pg_chainsync-.../.../pg_chainsync.so /usr/lib/postgresql/\ncp target/release/pg_chainsync-.../.../pg_chainsync--....sql /usr/share/postgresql/extension/\ncp target/release/pg_chainsync-.../.../pg_chainsync.control /usr/share/postgresql/extension/\n```\n\nThis should be enough to use `CREATE EXTENSION pg_chainsync`, but the extension also needs to be preloaded because it uses a background worker. To preload it, modify the `postgresql.conf` file and alter `shared_preload_libraries`:\n\n```\nshared_preload_libraries = 'pg_chainsync.so' # (change requires restart)\n```\n\nAfter adjusting the config, restart your database and check the postgres logs to verify it worked!\n\n\u003e Please refer to the pgrx documentation for full details on how to install a background worker extension if it does not work for you\n\n## Examples\n\nYou can see how the extension works in action with `podman compose` (podman) or `docker compose` (docker); run the examples using the `dev.sh` script, e.g. `./dev.sh examples/demo.sql`.\n\n```bash\nbun run demo # Runs examples/demo.sql\n```\n\nCurrently the extension is built on the host machine, so keep in mind your paths may vary depending on your `pg_config`. Make sure the extension gets built into the correct path; if it is different, you need to adjust the volumes in the `docker-compose.yml` file as shown here.\n\n```yaml\n- ./target/release/pg_chainsync-pg17/usr/lib64/pgsql/pg_chainsync.so:/usr/lib/postgresql/17/lib/pg_chainsync.so:z\n- ./target/release/pg_chainsync-pg17/usr/share/pgsql/extension/pg_chainsync.control:/usr/share/postgresql/17/extension/pg_chainsync.control:z\n- ./target/release/pg_chainsync-pg17/usr/share/pgsql/extension/pg_chainsync--0.0.0.sql:/usr/share/postgresql/17/extension/pg_chainsync--0.0.0.sql:z\n```\n\n## Configuration\n\nThe extension is configurable through the `postgresql.conf` file; here 
are the supported keys that you can modify.\n\n| GUC Variable                    | Description                                                     | Default  |\n| ------------------------------- | --------------------------------------------------------------- | -------- |\n| chainsync.database              | Database name the extension will run on                         | postgres |\n| chainsync.evm_ws_permits        | Number of concurrent tasks that can run using the same provider | 1        |\n| chainsync.evm_blocktick_reset   | Number of range fetches before trying to reset after reductions | 1        |\n| chainsync.svm_rpc_permits       | Number of rpc fetches that can run concurrently in a task       | 1        |\n| chainsync.svm_signatures_buffer | Maximum number of signatures to keep in a buffer                | 50000    |\n\n## License\n\n```LICENSE\nMIT License\n\nCopyright (c) Kamil Jakubus and contributors\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fusagi-coffee%2Fpg_chainsync","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fusagi-coffee%2Fpg_chainsync","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fusagi-coffee%2Fpg_chainsync/lists"}