{"id":37091926,"url":"https://github.com/niltooth/pgbuffer","last_synced_at":"2026-01-14T11:11:48.993Z","repository":{"id":64306111,"uuid":"269368746","full_name":"niltooth/pgbuffer","owner":"niltooth","description":"Buffer data in memory and bulk copy to postgresql concurrently","archived":false,"fork":false,"pushed_at":"2021-01-22T06:38:28.000Z","size":37,"stargazers_count":2,"open_issues_count":1,"forks_count":0,"subscribers_count":1,"default_branch":"master","last_synced_at":"2024-06-20T08:00:43.343Z","etag":null,"topics":["golang","postgresql","sql","timescaledb","timeseries"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/niltooth.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-06-04T13:35:34.000Z","updated_at":"2024-03-17T17:46:41.000Z","dependencies_parsed_at":"2023-01-15T10:45:39.536Z","dependency_job_id":null,"html_url":"https://github.com/niltooth/pgbuffer","commit_stats":null,"previous_names":["dev-mull/pgbuffer"],"tags_count":9,"template":false,"template_full_name":null,"purl":"pkg:github/niltooth/pgbuffer","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/niltooth%2Fpgbuffer","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/niltooth%2Fpgbuffer/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/niltooth%2Fpgbuffer/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/niltooth%2Fpgbuffer/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/niltooth","download_url":"https://codeload.github.com/niltooth/pgbuffer/tar.
gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/niltooth%2Fpgbuffer/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28418009,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-14T10:47:48.104Z","status":"ssl_error","status_checked_at":"2026-01-14T10:46:19.031Z","response_time":107,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["golang","postgresql","sql","timescaledb","timeseries"],"created_at":"2026-01-14T11:11:48.355Z","updated_at":"2026-01-14T11:11:48.988Z","avatar_url":"https://github.com/niltooth.png","language":"Go","readme":"# pgbuffer\nBuffer data in memory and bulk copy to postgres. 
This is especially useful with the timescaledb postgresql extension for timeseries workloads.\nIt suits any append-only workload, such as timeseries streams.\n\n# Features\n- Public flush signaling for custom flush handling, such as time-based or OS-signal-based flushes\n- Custom column definitions\n- Multi-worker concurrent COPY\n- Uses COPY instead of INSERT for greater performance\n## Installation\n```shell\ngo get github.com/dev-mull/pgbuffer\n```\n## Basic Usage\n\n```go\npackage main\n\nimport (\n    \"database/sql\"\n    \"log\"\n    \"os\"\n    \"os/signal\"\n    \"syscall\"\n    \"time\"\n\n    \"github.com/dev-mull/pgbuffer\"\n    _ \"github.com/lib/pq\"\n    \"github.com/sirupsen/logrus\"\n)\n\nfunc main() {\n    //Set up an optional logger\n    logger := logrus.New()\n    logger.SetOutput(os.Stdout)\n\n    //Set up a new buffer config\n    cfg := \u0026pgbuffer.Config{\n        Limit:   100,\n        Workers: 2,\n        Logger:  logger,\n        Tables: []*pgbuffer.BufferedData{\n            {\n                Table:   \"test\",\n                Columns: []string{\"time\", \"foo\", \"bar\"},\n            },\n        },\n    }\n\n    //Connect to the db\n    dbUrl := os.Getenv(\"DATABASE_URL\")\n    db, err := sql.Open(\"postgres\", dbUrl)\n    if err != nil {\n        log.Fatal(err)\n    }\n\n    //Initialize the buffer\n    buff, err := pgbuffer.NewBuffer(db, cfg)\n    if err != nil {\n        log.Fatal(err)\n    }\n\n    //Write a row of test data to the buffer every second.\n    //The buffer flushes once writes exceed the limit of 100.\n    go func() {\n        for {\n            time.Sleep(time.Second)\n            buff.Write(\"test\", time.Now(), \"check\", \"this\")\n        }\n    }()\n\n    //Clean shutdown on SIGINT/SIGTERM\n    sigs := make(chan os.Signal, 1)\n    signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)\n    go func() {\n        \u003c-sigs\n        buff.Stop()\n    }()\n\n    //Force a flush every minute\n    go func() {\n        t := time.NewTicker(time.Minute)\n        for range t.C {\n            buff.FlushAll()\n        }\n    }()\n\n    //Block and run until finished\n    buff.Run()\n}\n```\n## TODO\n- write statistics handling\n- optional buffer to disk instead of memory\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fniltooth%2Fpgbuffer","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fniltooth%2Fpgbuffer","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fniltooth%2Fpgbuffer/lists"}