{"id":28203980,"url":"https://github.com/gigapi/gigapi","last_synced_at":"2025-10-05T01:47:53.529Z","repository":{"id":152936733,"uuid":"624617502","full_name":"gigapi/gigapi","owner":"gigapi","description":"GigAPI is a Timeseries lakehouse for real-time data and sub-second queries, powered by DuckDB OLAP + Parquet Query Engine, Compactor w/ Cloud-Native Storage. Drop-in FDAP alternative ⭐","archived":false,"fork":false,"pushed_at":"2025-09-16T15:34:23.000Z","size":17595,"stargazers_count":342,"open_issues_count":15,"forks_count":12,"subscribers_count":10,"default_branch":"main","last_synced_at":"2025-09-16T17:03:58.706Z","etag":null,"topics":["api","clickhouse-server","data-lake","database","datalake","duckdb","duckdb-api","duckdb-server","ducklake","fdap","gigapipe","golang","lakehouse","olap","parquet","qryn","query-engine","rest-api","s3","sql"],"latest_commit_sha":null,"homepage":"https://gigapipe.com","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/gigapi.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2023-04-06T21:49:48.000Z","updated_at":"2025-09-15T22:04:09.000Z","dependencies_parsed_at":"2023-09-26T19:16:22.210Z","dependency_job_id":"b634ef78-a680-4bfe-b9a6-eb1d39bee59f","html_url":"https://github.com/gigapi/gigapi","commit_stats":null,"previous_names":["metrico/quackpipe"],"tags_count":69,"template":false,"template_full_name":null,"purl":"pkg:github/gigapi/gigapi","repository_url":"https://repos.ecosyste.m
s/api/v1/hosts/GitHub/repositories/gigapi%2Fgigapi","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/gigapi%2Fgigapi/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/gigapi%2Fgigapi/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/gigapi%2Fgigapi/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/gigapi","download_url":"https://codeload.github.com/gigapi/gigapi/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/gigapi%2Fgigapi/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":278399608,"owners_count":25980331,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-04T02:00:05.491Z","response_time":63,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["api","clickhouse-server","data-lake","database","datalake","duckdb","duckdb-api","duckdb-server","ducklake","fdap","gigapipe","golang","lakehouse","olap","parquet","qryn","query-engine","rest-api","s3","sql"],"created_at":"2025-05-17T04:00:26.589Z","updated_at":"2025-10-05T01:47:53.524Z","avatar_url":"https://github.com/gigapi.png","language":"Go","readme":"# \u003cimg src=\"https://github.com/user-attachments/assets/5b0a4a37-ecab-4ca6-b955-1a2bbccad0b4\" /\u003e\n\n# \u003cimg 
src=\"https://github.com/user-attachments/assets/74a1fa93-5e7e-476d-93cb-be565eca4a59\" height=25 /\u003e GigAPI: The Infinite Timeseries Lakehouse\n\nLike a durable parquet floor, GigAPI provides a rock-solid data foundation for your queries and analytics\n\n### \u003cimg src=\"https://github.com/user-attachments/assets/a9aa3ebd-9164-476d-aedf-97b817078350\" width=18 /\u003e **Problem**\n\u003e Traditional \"always-on\" OLAP databases such as ClickHouse are fast, but they are expensive to operate, complex to manage and scale, and often a funnel into a cloud product. Data lakes and lakehouses are cheaper, but they can't always handle real-time ingestion, and compacting and querying ever-growing datasets such as timeseries brings back costly operations and complexity. Meanwhile, various _\"open core\"_ solutions poison the landscape.\n\n### \u003cimg src=\"https://github.com/user-attachments/assets/a9aa3ebd-9164-476d-aedf-97b817078350\" width=18 /\u003e **Solution**\n\u003e GigAPI is a timeseries-optimized \"lakehouse\" designed for realtime data - lots of it - and for returning queries as fast as possible. By combining DuckDB's performance, FlightSQL's efficiency and Parquet's reliability with smart metadata, we've created a simple, lightweight solution ready to decimate complexity and infrastructure costs for ourselves and others.\n\u003e GigAPI is _100% open source - no open core or cloud product gimmicks_.\n\n\n\n### \u003cimg src=\"https://github.com/user-attachments/assets/a9aa3ebd-9164-476d-aedf-97b817078350\" width=18 /\u003e GigAPI Features\n\n* Fast: DuckDB SQL + Parquet powered OLAP API Engine\n* Flexible: Schema-less Parquet Ingestion \u0026 Compaction\n* Simple: Low Maintenance, Portable Catalog, Infinitely Scalable\n* Smart: Independent storage/write and compute/read components\n* Extensible: Built-In Query Engine _(DuckDB)_ or BYODB _(ClickHouse, Datafusion, etc)_\n\n\u003e [!WARNING]  \n\u003e GigAPI is an open beta developed in public. Bugs and changes should be expected. 
Use at your own risk.\n\n\n## \u003cimg src=\"https://github.com/user-attachments/assets/74a1fa93-5e7e-476d-93cb-be565eca4a59\" height=20 /\u003e Usage\n\n\u003e Here's the most basic example. For more complex usage samples, see the [examples](/examples) directory\n```yml\nservices:\n  gigapi:\n    image: ghcr.io/gigapi/gigapi:latest\n    container_name: gigapi\n    hostname: gigapi\n    restart: unless-stopped\n    volumes:\n      - ./data:/data\n    ports:\n      - \"7971:7971\"\n    environment:\n      - GIGAPI_ROOT=/data\n      - GIGAPI_LAYERS_0_NAME=default\n      - GIGAPI_LAYERS_0_TYPE=fs\n      - GIGAPI_LAYERS_0_URL=file:///data\n```\n### \u003cimg src=\"https://github.com/user-attachments/assets/a9aa3ebd-9164-476d-aedf-97b817078350\" width=18 /\u003e Settings\n\n| Env Var Name               | Description                                                         | Default Value |\n|----------------------------|---------------------------------------------------------------------|---------------|\n| `GIGAPI_ROOT`              | Root folder for all the data files                                  |               |\n| `GIGAPI_MERGE_TIMEOUT_S`   | Base timeout between merges (in seconds)                            | `10`          |\n| `GIGAPI_SAVE_TIMEOUT_S`    | Timeout before saving new data to disk (in seconds)                 | `1`           |\n| `GIGAPI_NO_MERGES`         | Disable merging                                                     | `false`       |\n| `GIGAPI_UI`                | Enable the query UI                                                 | `true`        |\n| `GIGAPI_MODE`              | Execution mode (`readonly`, `writeonly`, `compaction`, `aio`)       | `\"aio\"`       |\n| `GIGAPI_METADATA_TYPE`     | Metadata type (`json` for local, `redis` for distributed)           | `\"json\"`      |\n| `GIGAPI_METADATA_URL`      | Metadata URL for Redis (e.g. `redis://redis:6379/0`)                |               |\n| `HTTP_PORT`                | Port 
to listen on for HTTP server                                   | `7971`        |\n| `HTTP_HOST`                | Host to bind to for HTTP server                                     | `\"0.0.0.0\"`   |\n| `HTTP_BASIC_AUTH_USERNAME` | Username for HTTP basic authentication                              |               |\n| `HTTP_BASIC_AUTH_PASSWORD` | Password for HTTP basic authentication                              |               |\n| `FLIGHTSQL_PORT`           | Port to run the FlightSQL server on                                 | `8082`        |\n| `FLIGHTSQL_ENABLE`         | Enable FlightSQL server                                             | `true`        |\n| `LOGLEVEL`                 | Log level (`debug`, `info`, `warn`, `error`, `fatal`)               | `\"info\"`      |\n| `DUCKDB_MEM_LIMIT`         | DuckDB memory limit (e.g. `1GB`)                                    | `\"1GB\"`       |\n| `DUCKDB_THREAD_LIMIT`      | DuckDB thread limit (int)                                           | `1`           |\n| `GIGAPI_LAYERS_X_NAME`     | Unique name of layer `X` (layer indices start at 0)                 
|               |\n| `GIGAPI_LAYERS_X_TYPE`     | `fs` for local file system, `s3` for S3                             |               |\n| `GIGAPI_LAYERS_X_GLOBAL`   | `true` if all cluster nodes have access to the layer                |               |\n| `GIGAPI_LAYERS_X_URL`      | Path (for `fs`) or S3 URL (for `s3`)                                |               |\n| `GIGAPI_LAYERS_X_TTL`      | Time before data moves to the next layer or is dropped; `0` = never drop | `0`           |\n\n\u003e You can override the defaults by setting these environment variables before starting the service.\n\n\u003cbr\u003e\n\n## \u003cimg src=\"https://github.com/user-attachments/assets/74a1fa93-5e7e-476d-93cb-be565eca4a59\" height=20 /\u003e Write Support\nAs write requests come into GigAPI they are parsed and progressively appended to parquet files alongside their metadata. The ingestion buffer is flushed to disk at configurable intervals using a hive partitioning schema. Generated parquet files and their respective metadata are progressively compacted and sorted over time based on configuration parameters.\n\n### \u003cimg src=\"https://github.com/user-attachments/assets/a9aa3ebd-9164-476d-aedf-97b817078350\" width=18 /\u003e API\nGigAPI provides an HTTP API for clients to write, currently supporting the InfluxDB Line Protocol format.\n\n```bash\ncat \u003c\u003cEOF | curl -X POST \"http://localhost:7971/write?db=mydb\" --data-binary @/dev/stdin\nweather,location=us-midwest,season=summer temperature=82\nweather,location=us-east,season=summer temperature=80\nweather,location=us-west,season=summer temperature=99\nEOF\n```\n\n### \u003cimg src=\"https://github.com/user-attachments/assets/a9aa3ebd-9164-476d-aedf-97b817078350\" width=18 /\u003e FlightSQL\n\n\u003e [!NOTE]\n\u003e _FlightSQL ingestion is coming soon!_\n\n### \u003cimg src=\"https://github.com/user-attachments/assets/a9aa3ebd-9164-476d-aedf-97b817078350\" width=18 /\u003e Data Schema\nGigAPI is a schema-on-write 
database, managing databases, tables and schemas on the fly. New columns can be added or removed over time, leaving reconciliation up to readers.\n\n```bash\n/data\n  /mydb\n    /weather\n      /date=2025-04-10\n        /hour=14\n          *.parquet\n          metadata.json\n        /hour=15\n          *.parquet\n          metadata.json\n```\n\nGigAPI-managed parquet files use the following naming schema:\n```\n{UUID}.{LEVEL}.parquet\n```\n\n### \u003cimg src=\"https://github.com/user-attachments/assets/a9aa3ebd-9164-476d-aedf-97b817078350\" width=18 /\u003e Parquet Compactor\nGigAPI files are progressively compacted based on the following logic _(subject to future changes)_\n\n\n| Merge Level   | Source | Target | Frequency              | Max Size |\n|---------------|--------|--------|------------------------|----------|\n| Level 1 -\u003e 2  | `.1`   | `.2`   | `MERGE_TIMEOUT_S` = `10` | 100 MB   |\n| Level 2 -\u003e 3  | `.2`   | `.3`   | `MERGE_TIMEOUT_S` * `10` | 400 MB   |\n| Level 3 -\u003e 4  | `.3`   | `.4`   | `MERGE_TIMEOUT_S` * `10` * `10` | 4 GB     |\n\n\n\n## \u003cimg src=\"https://github.com/user-attachments/assets/74a1fa93-5e7e-476d-93cb-be565eca4a59\" height=20 /\u003e Read Support\nAs read requests come into GigAPI they are parsed and transpiled using the GigAPI metadata catalog, which resolves data locations based on the database, table and time range in each request. 
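\n\nReaders resolve file locations via the metadata catalog, but the hive partition layout also makes paths predictable. As a minimal, hypothetical Python sketch _(not the actual GigAPI implementation)_ of how a reader could enumerate candidate partition directories for a query time range, assuming the `date=`/`hour=` layout from the Data Schema section:\n\n```python\nfrom datetime import datetime, timedelta\n\ndef partition_dirs(root, db, table, start, end):\n    \"\"\"Yield hive partition directories covering [start, end], hour by hour.\"\"\"\n    t = start.replace(minute=0, second=0, microsecond=0)\n    while t \u003c= end:\n        yield f\"{root}/{db}/{table}/date={t:%Y-%m-%d}/hour={t:%H}\"\n        t += timedelta(hours=1)\n\ndirs = list(partition_dirs(\"/data\", \"mydb\", \"weather\",\n                           datetime(2025, 4, 10, 14), datetime(2025, 4, 10, 15)))\n# dirs covers hour=14 and hour=15 of date=2025-04-10\n```\n\n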
\n\nQuery data over a time range:\n```bash\n$ curl -X POST \"http://localhost:7971/query?db=mydb\" \\\n  -H \"Content-Type: application/json\" \\\n  --data-binary @- \u003c\u003cEOF\n{\"query\": \"SELECT time, temperature FROM weather WHERE time \u003e= epoch_ns('2025-04-24T00:00:00'::TIMESTAMP)\"}\nEOF\n```\n\nSeries can be used with or without time ranges, e.g. for counting or calculating averages:\n\n```bash\n$ curl -X POST \"http://localhost:7971/query?db=mydb\" \\\n  -H \"Content-Type: application/json\"  \\\n  -d '{\"query\": \"SELECT count(*), avg(temperature) FROM weather\"}'\n```\n```json\n{\"results\":[{\"avg(temperature)\":87.025,\"count_star()\":\"40\"}]}\n```\n\n#### \u003cimg src=\"https://github.com/user-attachments/assets/a9aa3ebd-9164-476d-aedf-97b817078350\" width=24 /\u003e FlightSQL\nGigAPI data can be accessed using FlightSQL gRPC clients in any language:\n```python\nfrom flightsql import FlightSQLClient, connect\n\nclient = FlightSQLClient(host='localhost', port=8082, insecure=True, metadata={'bucket': 'mydb'})\nconn = connect(client)\ncursor = conn.cursor()\ncursor.execute('SELECT count(*), avg(temperature) FROM weather')\nprint(\"rows:\", [r for r in cursor])\n```\n\n#### \u003cimg src=\"https://github.com/user-attachments/assets/a9aa3ebd-9164-476d-aedf-97b817078350\" width=24 /\u003e GigAPI UI\nThe embedded GigAPI UI can be used to explore and query data using SQL, with advanced features.\n\n![gigapi_preview](https://github.com/user-attachments/assets/8d550803-daa3-43dc-a4b3-b0779498fce5)\n\n\n#### \u003cimg src=\"https://github.com/user-attachments/assets/a9aa3ebd-9164-476d-aedf-97b817078350\" width=24 /\u003e Grafana\nGigAPI can be used from Grafana via the InfluxDB3 Flight gRPC datasource.\n\n![image](https://github.com/user-attachments/assets/a7849ff4-b8f6-433b-8458-1c47394c5e5f)\n\n\u003e GigAPI readers can be implemented in any language and with any OLAP engine supporting Parquet files.\n\n\u003cbr\u003e\n\n#### 
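Direct Parquet access\n\nBecause GigAPI stores plain hive-partitioned Parquet, any Parquet-capable engine can act as a reader. As a hypothetical illustration _(not an official client)_, DuckDB's Python API can query such files directly; the snippet below builds a tiny stand-in parquet file, but against a real deployment the path would be a glob over the layout from the Data Schema section, e.g. `read_parquet('/data/mydb/weather/*/*/*.parquet', hive_partitioning=true)`:\n\n```python\nimport os\nimport tempfile\n\nimport duckdb\n\n# Build a small stand-in parquet file (in GigAPI this is produced by ingestion).\npath = os.path.join(tempfile.mkdtemp(), \"weather.parquet\")\nduckdb.sql(f\"COPY (SELECT 82.0 AS temperature) TO '{path}' (FORMAT PARQUET)\")\n\n# Query it exactly as a reader would query the real files.\nrows = duckdb.sql(f\"SELECT count(*), avg(temperature) FROM read_parquet('{path}')\").fetchall()\nprint(rows)\n```\n\n#### 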
Layer support\n\nGigAPI employs a \"data layer\" concept for efficient data storage and management. \nA \"data layer\" represents a storage location, which can be either a file system or an \nS3 bucket, where data is stored for a specified duration. \nData within a layer undergoes merging operations and can be transferred between layers based on \nTime-to-Live (TTL) configurations.\n\n##### Layers configuration\n\nLayer configuration should be consistent across all readers and writers in the cluster. \nLayer names and paths must be identical throughout the cluster to ensure proper data access and management.\n\nThe metadata, stored either in JSON format or Redis, contains only the layer name. \nEach reader and writer determines the path to the parquet file based on this layer name.\n\n##### Layer configuration breakdown\n\nFor each layer, the following parameters can be configured:\n\n- `NAME`: A unique identifier for the layer.\n- `TYPE`: The storage type (`fs` for file system, `s3` for S3 bucket).\n- `URL`: The path or URL to the storage location.\n- `GLOBAL`: Boolean indicating if the layer is accessible to all cluster nodes.\n- `TTL`: Time-to-Live duration before data moves to the next layer (use `0` for no expiration).\n\nHere's an example of layer configuration using environment variables:\n```bash\n# Local Storage, Fastest, 30 minutes TTL\nGIGAPI_LAYERS_0_NAME=cache\nGIGAPI_LAYERS_0_TYPE=fs\nGIGAPI_LAYERS_0_URL=file:///data\nGIGAPI_LAYERS_0_GLOBAL=false\nGIGAPI_LAYERS_0_TTL=30m\n\n# Remote Layer 1, Fast-enough, 4 weeks TTL\nGIGAPI_LAYERS_1_NAME=s3\nGIGAPI_LAYERS_1_TYPE=s3\nGIGAPI_LAYERS_1_URL=s3://s3.server.hostname/bucket/prefix/to/layer\nGIGAPI_LAYERS_1_AUTH_KEY=s3_api_key\nGIGAPI_LAYERS_1_AUTH_SECRET=s3_api_secret\nGIGAPI_LAYERS_1_GLOBAL=true\nGIGAPI_LAYERS_1_TTL=4w\n\n# Remote Layer 2, Slower, forever 
TTL\nGIGAPI_LAYERS_2_NAME=r2\nGIGAPI_LAYERS_2_TYPE=s3\nGIGAPI_LAYERS_2_URL=s3://r2.server.hostname/bucket/prefix/to/layer\nGIGAPI_LAYERS_2_AUTH_KEY=cloudflare_key\nGIGAPI_LAYERS_2_AUTH_SECRET=cloudflare_secret\nGIGAPI_LAYERS_2_GLOBAL=true\nGIGAPI_LAYERS_2_TTL=0\n```\n\nIn this configuration:\n\n1. The first layer (`GIGAPI_LAYERS_0_*`) is a local cache:\n    - It uses the file system (`fs`) as the storage type.\n    - Data is stored locally and is not globally accessible (`GLOBAL=false`).\n    - Data remains in this layer for 30 minutes before moving to the next layer (`TTL=30m`).\n\n2. The second layer (`GIGAPI_LAYERS_1_*`) is an S3 bucket:\n    - It uses S3 as the storage type.\n    - Data is globally accessible to all cluster nodes (`GLOBAL=true`).\n    - Data remains in this layer for 4 weeks before moving to the next layer (`TTL=4w`).\n\n3. The third layer (`GIGAPI_LAYERS_2_*`) is a second S3 bucket (here, Cloudflare R2):\n    - It uses S3 as the storage type.\n    - Data is globally accessible to all cluster nodes (`GLOBAL=true`).\n    - Data remains in this layer indefinitely (`TTL=0`).\n\n## \u003cimg src=\"https://github.com/user-attachments/assets/74a1fa93-5e7e-476d-93cb-be565eca4a59\" height=20 /\u003e S3 Configuration\n\nGigAPI supports S3-compatible storage for data layers. The S3 URL format is as follows:\n\n```\ns3://[endpoint_url]/[bucket]/[path/to/base]?[parameters]\n```\n\nThe access key and secret key are provided in separate env variables:\n- `GIGAPI_LAYERS_[X]_AUTH_KEY=api_key` - for access key\n- `GIGAPI_LAYERS_[X]_AUTH_SECRET=api_secret` - for secret key\n\n### URL Components:\n\n- `endpoint_url`: The S3 endpoint URL (e.g., `s3.amazonaws.com` for AWS S3)\n- `bucket`: Your S3 bucket name\n- `path/to/base`: Optional path prefix within the bucket\n\n### URL Parameters:\n\n| Parameter | Description                                                                   | Default |\n|-----------|-------------------------------------------------------------------------------|---------|\n| secure    | Whether to use SSL. Set to `true` for most cases, `false` for local testing   | true    |\n| url-style | S3 URL style. Use `vhost` for AWS S3, `path` for most other S3 implementations| vhost   |\n\n### Examples:\n\n1. 
AWS S3:\n```\nGIGAPI_LAYERS_X_URL=s3://s3.amazonaws.com/my-bucket/data\nGIGAPI_LAYERS_X_AUTH_KEY=EXAMPLE_KEY\nGIGAPI_LAYERS_X_AUTH_SECRET=EXAMPLE_SECRET\n```\n\n2. Local MinIO server:\n```\nGIGAPI_LAYERS_X_URL=s3://localhost:9000/gigapi?secure=false\u0026url-style=path\nGIGAPI_LAYERS_X_AUTH_KEY=minioadmin\nGIGAPI_LAYERS_X_AUTH_SECRET=minioadmin\n```\n\n3. DigitalOcean Spaces:\n```\nGIGAPI_LAYERS_X_URL=s3://nyc3.digitaloceanspaces.com/my-space/data?url-style=path\nGIGAPI_LAYERS_X_AUTH_KEY=EXAMPLE_KEY\nGIGAPI_LAYERS_X_AUTH_SECRET=EXAMPLE_SECRET\n```\n\n### Security Considerations:\n\n1. Always use `secure=true` in production environments to ensure encrypted connections.\n2. Protect your access and secret keys. Consider using environment variables or a secrets management system instead of hardcoding them in the URL.\n3. Use IAM roles and policies (for AWS) or equivalent access control mechanisms to limit permissions to the minimum necessary.\n\n### Troubleshooting:\n\n- If you encounter \"Access Denied\" errors, double-check your access key, secret key, and bucket permissions.\n- For connection issues, verify the endpoint URL and ensure proper network access.\n- When using non-AWS S3 implementations, you may need to set `url-style=path`.\n\n\u003e Note: Always refer to your specific S3 provider's documentation for any provider-specific configurations or limitations.\n\n## \u003cimg src=\"https://github.com/user-attachments/assets/74a1fa93-5e7e-476d-93cb-be565eca4a59\" height=20 /\u003e  GigAPI Diagram\n\n```mermaid\n%%{\n  init: {\n    'theme': 'base',\n    'themeVariables': {\n      'primaryColor': '#6a329f',\n      'primaryTextColor': '#fff',\n      'primaryBorderColor': '#7C0000',\n      'lineColor': '#6f329f',\n      'secondaryColor': '#006100',\n      'tertiaryColor': '#fff'\n    }\n  }\n}%%\ngraph TD\n    subgraph \"GigAPI System\"\n        HTTP[\"HTTP API\"] --\u003e DataIngestion[\"Data Ingestion Pipeline\"]\n        GRPC[\"GRPC API\"] --\u003e 
FlightSQL[\"FlightSQL Service\"]\n\n        Configuration[\"Metadata Store\"] --\u003e Storage\n        Configuration --\u003e DataIngestion\n        Configuration --\u003e MergeProcess\n        MergeProcess --\u003e Configuration\n\n        FlightSQL --\u003e Storage[\"Storage System\"]\n        FlightSQL --\u003e DuckDB[\"DuckDB Engine\"]\n\n        DataIngestion --\u003e Storage\n        Storage --\u003e MergeProcess[\"Merge Process\"]\n        Storage --\u003e QueryEngine[\"Query Engine\"]\n\n        DuckDB --\u003e Configuration\n    end\n\n    Client[\"Client Applications\"] --\u003e HTTP\n    Client --\u003e GRPC\n\n    Storage --\u003e LocalFS[\"Local Filesystem\"]\n    Storage --\u003e S3[\"S3 Storage\"]\n\n    QueryEngine --\u003e DuckDB\n    FlightSQL --\u003e Configuration\n```\n\n\n### Got Questions?\n[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/gigapi/gigapi)\n\n\n### Contributors\n\n\u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;[![Contributors of gigapi/gigapi](https://contrib.rocks/image?repo=gigapi/gigapi)](https://github.com/gigapi/gigapi/graphs/contributors)\n\n### Community\n\n[![Stargazers for gigapi/gigapi](https://reporoster.com/stars/gigapi/gigapi)](https://github.com/gigapi/gigapi/stargazers)\n\n###### :black_joker: Disclaimers \n\n- DuckDB ® is a trademark of DuckDB Foundation. All rights reserved by their respective owners.\n- ClickHouse ® is a trademark of ClickHouse Inc. No direct affiliation or endorsement.\n- InfluxDB ® is a trademark of InfluxData. No direct affiliation or endorsement.\n- Released under the MIT license. See LICENSE for details. All rights reserved by their respective owners. 
\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgigapi%2Fgigapi","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fgigapi%2Fgigapi","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fgigapi%2Fgigapi/lists"}