This is a Rust implementation of the Beacon Chain blob archiver
- Host: GitHub
- URL: https://github.com/optimism-java/blob-archiver-rs
- Owner: optimism-java
- License: MIT
- Created: 2024-03-19T06:59:50.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-03-06T03:27:53.000Z (4 months ago)
- Last Synced: 2025-04-01T13:37:59.654Z (3 months ago)
- Topics: blob, eip-4844, eth2-beacon-chain, ethereum, rust, rust-lang
- Language: Rust
- Homepage: https://optimism-java.github.io/blob-archiver-rs-docs
- Size: 238 KB
- Stars: 1
- Watchers: 2
- Forks: 1
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
# blob-archiver-rs
This is a Rust implementation of the [Beacon Chain blob archiver](https://github.com/base/blob-archiver).

The Blob Archiver is a service that archives all historical blobs from the beacon chain and allows them to be queried. It consists of two components:

* **Archiver** - Tracks the beacon chain and writes blobs to a storage backend
* **API** - Implements the blob sidecars [API](https://ethereum.github.io/beacon-APIs/#/Beacon/getBlobSidecars), which allows clients to retrieve blobs from the storage backend

### Storage
There are currently two supported storage options:

* On-disk storage - Blobs are written to disk in a directory
* S3 storage - Blobs are written to an S3 bucket (or compatible service)

You can control which storage backend is used by setting the `STORAGE_TYPE` environment variable to either `file` or `s3`. The `s3` backend also works with, for example, Google Cloud Storage buckets (instructions [here](https://medium.com/google-cloud/using-google-cloud-storage-with-minio-object-storage-c994fe4aab6b)).
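The backend selection described above can be sketched as follows. The `StorageType` enum and `parse_storage_type` function are illustrative assumptions, not the crate's actual API — only the accepted values `file` and `s3` come from this README:

```rust
// Illustrative sketch of STORAGE_TYPE backend selection.
// Names are assumptions; only the values "file" and "s3"
// come from the README.
#[derive(Debug, PartialEq)]
enum StorageType {
    File, // on-disk storage: blobs written to a directory
    S3,   // S3 bucket or any S3-compatible service (e.g. MinIO, GCS)
}

fn parse_storage_type(raw: &str) -> Option<StorageType> {
    match raw.trim().to_ascii_lowercase().as_str() {
        "file" => Some(StorageType::File),
        "s3" => Some(StorageType::S3),
        _ => None,
    }
}

fn main() {
    // Typically the value would come from the STORAGE_TYPE
    // environment variable mentioned above.
    let backend = std::env::var("STORAGE_TYPE").unwrap_or_else(|_| "file".to_string());
    println!("{:?}", parse_storage_type(&backend));
}
```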
### Development
```sh
# Run the tests
cargo test --workspace --all-features --all-targets --locked

# Lint the project
cargo clippy --workspace --all-targets --all-features -- -D warnings

# Build the project
cargo build --workspace --all-targets --all-features
```
#### Run Locally
To run the project locally, first copy `.env.template` to `.env` and modify the environment variables to point at your beacon client and storage backend of choice. Then run the project with:

```sh
docker compose up
```

#### Get blobs
After the service starts successfully, you can fetch blobs with the following commands.

- Get a blob by block id from the API service:
```shell
# Other block id values are also supported, e.g. finalized, justified.
curl -X 'GET' 'http://localhost:8000/eth/v1/beacon/blob_sidecars/head' -H 'accept: application/json'
```

- Get a blob by slot number from the API service:
```shell
curl -X 'GET' 'http://localhost:8000/eth/v1/beacon/blob_sidecars/7111008' -H 'accept: application/json'
```

#### Storage Dashboard
MinIO serves a dashboard that lets you view the status of blob storage.
By default, it is available at:
```http
http://localhost:9999
```

## Options

### `verbose`

```shell
--verbose=
```

```shell
--verbose=2
```

### `log_dir`

```shell
--log_dir=
```

```shell
--log_dir=/var/log/blob-archiver
```

### `log_rotation`

```shell
--log_rotation=
```

```shell
--log_rotation=DAILY
```

### `beacon_endpoint`

```shell
--beacon_endpoint=
```

```shell
--beacon_endpoint=http://localhost:5052
```

### `beacon_client_timeout`

```shell
--beacon_client_timeout=
```

```shell
--beacon_client_timeout=10
```

### `poll_interval`

```shell
--poll_interval=
```

```shell
--poll_interval=6
```

### `listen_addr`

```shell
--listen_addr=
```

```shell
--listen_addr=0.0.0.0:8000
```

### `origin_block`

```shell
--origin_block=
```

```shell
--origin_block="0x0"
```

### `storage_type`

```shell
--storage_type=
```

```shell
--storage_type="s3"
```

### `s3_endpoint`

```shell
--s3_endpoint=
```

```shell
--s3_endpoint="http://localhost:9000"
```

### `s3_bucket`

```shell
--s3_bucket=
```

```shell
--s3_bucket="blobs"
```

### `s3_path`

```shell
--s3_path=
```

```shell
--s3_path=/blobs
```

### `s3_compress`

```shell
--s3_compress=
```

```shell
--s3_compress=false
```

### `fs_dir`

```shell
--fs_dir=
```

```shell
--fs_dir=/blobs
```
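Putting several of these flags together, a local run with on-disk storage might look like the following sketch. The `cargo run --` entry point is an assumption, since the README does not show a full invocation; only the flags and values come from the options above:

```shell
# Hypothetical invocation combining the documented flags;
# the entry point (`cargo run --`) is an assumption.
cargo run -- \
  --verbose=2 \
  --log_dir=/var/log/blob-archiver \
  --beacon_endpoint=http://localhost:5052 \
  --beacon_client_timeout=10 \
  --poll_interval=6 \
  --listen_addr=0.0.0.0:8000 \
  --storage_type="file" \
  --fs_dir=/blobs
```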