{"id":18118184,"url":"https://github.com/kagis/pgwire","last_synced_at":"2025-04-04T20:09:40.699Z","repository":{"id":44680115,"uuid":"163120899","full_name":"kagis/pgwire","owner":"kagis","description":"PostgreSQL client library for Deno and Node.js that exposes all features of wire protocol.","archived":false,"fork":false,"pushed_at":"2024-10-22T16:04:09.000Z","size":439,"stargazers_count":81,"open_issues_count":6,"forks_count":9,"subscribers_count":6,"default_branch":"main","last_synced_at":"2025-03-28T19:08:18.933Z","etag":null,"topics":["libpq","logical-replication","postgres","postgresql","postgresql-driver","streaming-replication"],"latest_commit_sha":null,"homepage":"","language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/kagis.png","metadata":{"files":{"readme":"readme.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2018-12-26T00:57:43.000Z","updated_at":"2025-03-23T19:01:07.000Z","dependencies_parsed_at":"2023-11-17T00:07:41.658Z","dependency_job_id":"2bfd4eea-401c-47d5-99eb-f9215170527d","html_url":"https://github.com/kagis/pgwire","commit_stats":{"total_commits":97,"total_committers":3,"mean_commits":"32.333333333333336","dds":"0.061855670103092786","last_synced_commit":"b992f307097ac5bd350ba41ea4c85d194ccb611f"},"previous_names":[],"tags_count":8,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kagis%2Fpgwire","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kagis%2Fpgwire/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kagi
s%2Fpgwire/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kagis%2Fpgwire/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/kagis","download_url":"https://codeload.github.com/kagis/pgwire/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247242678,"owners_count":20907134,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["libpq","logical-replication","postgres","postgresql","postgresql-driver","streaming-replication"],"created_at":"2024-11-01T05:09:15.467Z","updated_at":"2025-04-04T20:09:40.676Z","avatar_url":"https://github.com/kagis.png","language":"JavaScript","readme":"# pgwire is\n\nPostgreSQL client library for Deno and Node.js that exposes all features of wire protocol.\n\n- Memory efficient data streaming\n- Logical replication, including pgoutput protocol\n- Copy from stdin and to stdout\n- Query cancellation\n- Implicit-transaction multi-statement queries\n- Listen/Notify\n- Query pipelining, single round trip\n- Efficient bytea transferring\n- Pure js without dependencies\n\n# Create connection\n\n```js\nimport { pgconnection } from 'https://raw.githubusercontent.com/kagis/pgwire/main/mod.js';\n//                                   use exact commit or tag instead of main ^^^^\nconst pg = pgconnection('postgres://USER:PASSWORD@HOST:PORT/DATABASE');\n```\n\nhttps://www.postgresql.org/docs/16/libpq-connect.html#LIBPQ-CONNSTRING-URIS\n\nA good practice is to get the connection URI from an environment variable:\n\n```js\n// app.js\nimport { pgconnection } 
from 'https://raw.githubusercontent.com/kagis/pgwire/main/mod.js';\nconst pg = pgconnection(Deno.env.get('POSTGRES'));\n```\n\nSet the `POSTGRES` environment variable when running the process:\n\n```sh\n$ POSTGRES='postgres://USER:PASSWORD@HOST:PORT/DATABASE' deno run --allow-env --allow-net app.js\n```\n\nThe `pgconnection()` function also accepts parameters as an object:\n\n```js\nconst pg = pgconnection({\n  host: '127.0.0.1',\n  port: 5432,\n  user: 'postgres',\n  password: 'postgres',\n  database: 'postgres',\n});\n```\n\nIt's possible to pass multiple connection URIs or objects to the `pgconnection()` function. In this case the actual connection parameters are computed by merging all parameters with first-win priority. The following technique can be used to force specific parameter values or provide default-fallback values:\n\n```js\nconst pg = pgconnection(Deno.env.get('POSTGRES'), {\n  // use default application_name if not set in env\n  application_name: 'my-awesome-app',\n});\n```\n\nDon't forget to `.end()` the connection when you don't need it anymore:\n\n```js\nconst pg = pgconnection(Deno.env.get('POSTGRES'));\ntry {\n  // use `pg`\n} finally {\n  await pg.end();\n}\n```\n\n# Using pgwire in web server\n\n```js\n// app.js\nimport { pgpool } from 'https://raw.githubusercontent.com/kagis/pgwire/main/mod.js';\n\nconst pg = pgpool(Deno.env.get('POSTGRES'));\ntry {\n  const websrv = Deno.serve({\n    async handler(_req) {\n      const [greeting] = await pg.query(`SELECT 'hello world, ' || now()`);\n      return new Response(greeting);\n    },\n  });\n  await websrv.finished;\n} finally {\n  await pg.end();\n}\n```\n\n```sh\n$ POSTGRES='postgres://USER:PASSWORD@HOST:PORT/DATABASE?_poolSize=4\u0026_poolIdleTimeout=5min' deno run --allow-env --allow-net app.js\n```\n\nWhen `_poolSize` is not set, a new connection is created for each query. 
This option makes it possible to switch to an external connection pool like PgBouncer.\n\n# Querying\n\n```js\nconst { rows } = await pg.query(`\n  SELECT i, 'Mississippi'\n  FROM generate_series(1, 3) i\n`);\nassertEquals(rows, [\n  [1, 'Mississippi'],\n  [2, 'Mississippi'],\n  [3, 'Mississippi'],\n]);\n```\n\nFunction calls and other single-value results can be accessed by array destructuring.\n\n```js\nconst [scalar] = await pg.query(`SELECT current_user`);\nassertEquals(scalar, 'postgres');\n```\n\n# Parametrized query\n\n```js\nconst { rows } = await pg.query({\n  statement: `\n    SELECT i, $1\n    FROM generate_series(1, $2) i\n  `,\n  params: [\n    { type: 'text', value: 'Mississippi' }, // $1\n    { type: 'int4', value: 3 }, // $2\n  ],\n});\nassertEquals(rows, [\n  [1, 'Mississippi'],\n  [2, 'Mississippi'],\n  [3, 'Mississippi'],\n]);\n```\n\nTODO Why no interpolation API\n\n# Multi-statement queries\n\nPostgres allows executing multiple statements within a single query.\n\n```js\nconst {\n  results: [\n    , // skip CREATE TABLE category\n    , // skip CREATE TABLE product\n    { rows: categories },\n    { rows: products },\n  ],\n} = await pg.query(`\n  -- lets generate sample data\n  CREATE TEMP TABLE category(id) AS VALUES\n  ('fruits'),\n  ('vegetables');\n\n  CREATE TEMP TABLE product(id, category_id) AS VALUES\n  ('apple', 'fruits'),\n  ('banana', 'fruits'),\n  ('carrot', 'vegetables');\n\n  -- then select all generated data\n  SELECT id FROM category ORDER BY id;\n  SELECT id, category_id FROM product ORDER BY id;\n`);\n\nassertEquals(categories, [\n  ['fruits'],\n  ['vegetables'],\n]);\nassertEquals(products, [\n  ['apple', 'fruits'],\n  ['banana', 'fruits'],\n  ['carrot', 'vegetables'],\n]);\n```\n\nPostgres wraps a multi-statement query into a transaction implicitly. The implicit transaction rolls back automatically when an error occurs, or commits when all statements execute successfully. 
Multi-statement queries and implicit transactions are described here https://www.postgresql.org/docs/16/protocol-flow.html#PROTOCOL-FLOW-MULTI-STATEMENT\n\nThe top-level `rows` accessor will contain rows returned by the last SELECTish statement.\nThe iterator accessor will iterate over the first row returned by the last SELECTish statement.\n\n```js\nconst retval = await pg.query(`\n  SELECT id FROM category;\n\n  -- Let's select products before update.\n  -- This is the last SELECTish statement\n  SELECT id, category_id FROM product ORDER BY id FOR UPDATE;\n\n  -- UPDATE is not SELECTish (unless RETURNING used)\n  UPDATE product SET category_id = 'food';\n`);\n\nconst { rows } = retval;\nassertEquals(rows, [\n  ['apple', 'fruits'],\n  ['banana', 'fruits'],\n  ['carrot', 'vegetables'],\n]);\n\nconst [topProductId, topProductCategory] = retval;\nassertEquals(topProductId, 'apple');\nassertEquals(topProductCategory, 'fruits');\n```\n\n# Large datasets streaming\n\nThere are two ways of fetching a query result.\nThe first way is to call `await pg.query()`. In this case all rows will be loaded into memory.\n\nThe other way to consume a result is to call `pg.stream()` and iterate over data chunks.\n\n```js\nconst iterable = pg.stream(`SELECT i FROM generate_series(1, 2000) i`);\nlet sum = 0;\nfor await (const chunk of iterable)\nfor (const [i] of chunk.rows) {\n  sum += i;\n}\n// sum of natural numbers from 1 to 2000\nassertEquals(sum, 2001000);\n```\n\n`pg.stream()` accepts the same parameters as `pg.query()`, and supports parametrized and multi-statement queries.\n\nTODO describe chunk shape\n\n# Copy from stdin\n\nIf the statement is `COPY ... 
FROM STDIN`, the `stdin` parameter must be set.\n\n```js\nasync function * generateData() {\n  const utf8enc = new TextEncoder();\n  for (let i = 1; i \u003c= 3; i++) {\n    yield utf8enc.encode(i + '\\t' + 'Mississippi' + '\\n');\n  }\n}\n\nawait pg.query({\n  statement: `COPY foo FROM STDIN`,\n  stdin: generateData(),\n});\n```\n\n# Copy to stdout\n\n```js\nconst upstream = pg.stream({\n  statement: `COPY foo TO STDOUT`,\n});\n\nconst utf8dec = new TextDecoder();\nlet result = '';\nfor await (const chunk of upstream) {\n  result += utf8dec.decode(chunk);\n}\n\nassertEquals(result,\n  '1\\tMississippi\\n' +\n  '2\\tMississippi\\n' +\n  '3\\tMississippi\\n'\n);\n```\n\n# Listen and Notify\n\nhttps://www.postgresql.org/docs/11/sql-notify.html\n\n```js\npg.onnotification = ({ pid, channel, payload }) =\u003e {\n  try {\n    console.log(pid, channel, payload);\n  } catch (err) {\n    // handle error or let process exit\n  }\n};\nawait pg.query(`LISTEN some_channel`);\n```\n\nTODO backpressure doc\n\n# Simple and Extended query protocols\n\nPostgres has two query protocols: simple and extended. The simple protocol allows sending a multi-statement query as a single script where statements are delimited by semicolons. **pgwire** utilizes the simple protocol when `.query()` is called with a string as the first argument:\n\n```js\nawait pg.query(`\n  CREATE TABLE foo (a int, b text);\n  INSERT INTO foo VALUES (1, 'hello');\n  COPY foo FROM STDIN;\n  SELECT * FROM foo;\n`, {\n  // optional stdin for COPY FROM STDIN statements\n  stdin: fs.createReadStream('/tmp/file1.tsv'),\n  // stdin also accepts array of streams for multiple\n  // COPY FROM STDIN statements\n  stdins: [\n    fs.createReadStream(...),\n    fs.createReadStream(...),\n    ...\n  ],\n});\n```\n\nThe extended query protocol allows passing parameters for each statement, so it splits statements into separate chunks. 
It's possible to use the extended query protocol by passing one or more statement objects to the `.query()` function:\n\n```js\nawait pg.query({\n  statement: `CREATE TABLE foo (a int, b text)`,\n}, {\n  statement: `INSERT INTO foo VALUES ($1, $2)`,\n  params: [\n    { type: 'int4', value: 1 },       // $1\n    { type: 'text', value: 'hello' }, // $2\n  ],\n}, {\n  statement: 'COPY foo FROM STDIN',\n  stdin: fs.createReadStream('/tmp/file1.tsv'),\n}, {\n  statement: 'SELECT * FROM foo',\n});\n```\n\n# Logical replication\n\nLogical replication is a native PostgreSQL mechanism which allows your app to subscribe to data modification events such as insert, update, delete and truncate. This mechanism can be useful in different ways: replica synchronization, cache invalidation or history tracking. https://www.postgresql.org/docs/11/logical-replication.html\n\nLet's prepare the database for logical replication. First we need to configure the PostgreSQL server to write enough information to the WAL:\n\n```sql\nALTER SYSTEM SET wal_level = logical;\n-- then restart postgres server\n```\n\nWe need to create a replication slot for our app. 
A replication slot is a PostgreSQL entity which behaves like a message queue of replication events.\n\n```sql\nSELECT pg_create_logical_replication_slot(\n  'my-app-slot',\n  'test_decoding' -- logical decoder plugin\n);\n```\n\nGenerate some modification events:\n\n```sql\nCREATE TABLE foo(a INT NOT NULL PRIMARY KEY, b TEXT);\nINSERT INTO foo VALUES (1, 'hello'), (2, 'world');\nUPDATE foo SET b = 'all' WHERE a = 1;\nDELETE FROM foo WHERE a = 2;\nTRUNCATE foo;\n```\n\nNow we are ready to consume replication messages:\n\n```js\nimport { pgconnection } from 'https://raw.githubusercontent.com/kagis/pgwire/main/mod.js';\n\nconst pg = pgconnection({ replication: 'database' }, Deno.env.get('POSTGRES'));\ntry {\n  const replicationStream = pg.logicalReplication({ slot: 'my-app-slot' });\n  const utf8dec = new TextDecoder();\n  for await (const { lastLsn, messages } of replicationStream) {\n    for (const { lsn, data } of messages) {\n      // consume message somehow\n      console.log(lsn, utf8dec.decode(data));\n    }\n    replicationStream.ack(lastLsn);\n  }\n} finally {\n  await pg.end();\n}\n```\n\n# \"pgoutput\" logical replication decoder\n\nModification events go through a pluggable logical decoder before they are emitted to the client. The purpose of a logical decoder is to serialize in-memory event structures into a consumable message stream. PostgreSQL has two built-in logical decoders:\n\n- `test_decoding`, which emits human-readable messages for debugging and testing,\n\n- and the production-ready `pgoutput` logical decoder, which adapts modification events for replica synchronization. 
**pgwire** implements a `pgoutput` message parser.\n\n```sql\nCREATE TABLE foo(a INT NOT NULL PRIMARY KEY, b TEXT);\nCREATE PUBLICATION \"my-app-pub\" FOR TABLE foo;\n\nSELECT pg_create_logical_replication_slot(\n  'my-app-slot',\n  'pgoutput' -- logical decoder plugin\n);\n```\n\n```js\nimport { pgconnection } from 'https://raw.githubusercontent.com/kagis/pgwire/main/mod.js';\n\nconst pg = pgconnection({ replication: 'database' }, Deno.env.get('POSTGRES'));\ntry {\n  const replicationStream = pg.logicalReplication({\n    slot: 'my-app-slot',\n    options: {\n      'proto_version': 1,\n      'publication_names': 'my-app-pub',\n    },\n  });\n  for await (const { lastLsn, messages } of replicationStream.pgoutputDecode()) {\n    for (const pgomsg of messages) {\n      // consume pgomsg\n    }\n    replicationStream.ack(lastLsn);\n  }\n} finally {\n  await pg.end();\n}\n```\n\n# .query()\n\n```js\nconst pgresult = await conn.query(` SELECT 'hello', 'world' `);\n\n// PgResult is Iterable, which iterates over\n// the first row of the last SELECTish statement.\nconst [hello, world] = pgresult;\nassertEquals(hello, 'hello');\nassertEquals(world, 'world');\n\nassertEquals(pgresult.rows, [['hello', 'world']]);\nassertEquals(pgresult.status, 'SELECT 1');\n\nfor (const column of pgresult.columns) {\n  column.name;\n  column.binary;\n  column.typeOid;\n  column.typeMod;\n  column.typeSize;\n  column.tableOid;\n  column.tableColumn;\n}\n\nfor (const sub of pgresult.results) {\n  sub.rows;\n  sub.status;\n\n  // see pgresult.columns above\n  sub.columns;\n}\n\nfor (const notice of pgresult.notices) {\n  notice.severity;\n  notice.message;\n}\n```\n\n# .stream()\n\n# pgoutput messages\n\n```js\nfor await (const chunk of replstream.pgoutputDecode()) {\n  // (string) Last valid received lsn.\n  // Use it for replstream.ack() to confirm receipt of whole chunk.\n  chunk.lastLsn;\n  // (bigint) Time of last received message. 
Microseconds since unix epoch.\n  chunk.lastTime;\n\n  for (const pgomsg of chunk.messages) {\n    // (string | null) Log Sequence Number of the message.\n    // Use it for replstream.ack() to confirm receipt of message.\n    // TODO describe nullability.\n    pgomsg.lsn;\n    // (bigint) The server's system clock at the time of transmission,\n    // as microseconds since Unix epoch.\n    pgomsg.time;\n\n    switch (pgomsg.tag) {\n\n      // Transaction start boundary.\n      case 'begin':\n        // (string) Equals to `.commitLsn` of following `commit` message\n        // https://github.com/postgres/postgres/blob/27b77ecf9f4d5be211900eda54d8155ada50d696/src/include/replication/reorderbuffer.h#L275\n        pgomsg.commitLsn;\n        // (bigint) Microseconds since unix epoch.\n        pgomsg.commitTime;\n        // (number) Transaction id.\n        pgomsg.xid;\n\n      case 'origin':\n        // (string)\n        pgomsg.originName;\n        // (string)\n        pgomsg.originLsn;\n\n      // Emitted for user defined types which are used\n      // in following `relation` message\n      case 'type':\n        // (string)\n        pgomsg.typeOid;\n        // (string)\n        pgomsg.typeSchema;\n        // (string)\n        pgomsg.typeName;\n\n      // Relation (table) description.\n      // Emitted once per relation before\n      // `insert`/`update`/`delete`/`truncate` messages\n      // which reference this relation.\n      case 'relation':\n        // (number) pg_class reference.\n        pgomsg.relationOid;\n        // (string) Relation (table) schema name\n        pgomsg.schema;\n        // (string) Relation (table) name\n        pgomsg.name;\n        // ('default' | 'nothing' | 'full' | 'index')\n        // https://www.postgresql.org/docs/14/sql-altertable.html#SQL-ALTERTABLE-REPLICA-IDENTITY\n        pgomsg.replicaIdentity;\n        // (object[]) Relation column descriptions\n        for (const column of pgomsg.columns) {\n          // (number) 0b1 if column is part 
of replica identity\n          column.flags;\n          // (string)\n          column.name;\n          // (number)\n          column.typeOid;\n          // (number)\n          column.typeMod;\n          // (string | null)\n          column.typeName;\n          // (string | null)\n          column.typeSchema;\n        }\n\n      case 'insert':\n        // (object) Associated relation.\n        pgomsg.relation;\n        // (object) Inserted row values.\n        pgomsg.after;\n\n      case 'update':\n        // (object) Associated relation.\n        pgomsg.relation;\n        // (object | null) If pgomsg.relation.replicaIdentity == 'full'\n        // then gets row values before update.\n        pgomsg.before;\n        // (object) Row values after update.\n        // If pgomsg.relation.replicaIdentity != 'full'\n        // then unchanged TOASTed values will be undefined.\n        // https://www.postgresql.org/docs/14/storage-toast.html\n        pgomsg.after;\n\n      case 'delete':\n        // (object) Associated relation.\n        pgomsg.relation;\n        // (object | null) If pgomsg.relation.replicaIdentity == 'full'\n        // then gets deleted row values, otherwise gets null\n        pgomsg.before;\n\n      case 'truncate':\n        // (boolean)\n        pgomsg.cascade;\n        // (boolean)\n        pgomsg.restartIdentity;\n        // (object[]) Truncated relations descriptions.\n        pgomsg.relations;\n\n      // pg_logical_emit_message\n      // https://www.postgresql.org/docs/14/functions-admin.html#id-1.5.8.33.8.5.2.2.22.1.1.1\n      case 'message':\n        // (string)\n        pgomsg.prefix;\n        // (Uint8Array)\n        pgomsg.content;\n        // (string) Equals to lsn which `pg_logical_emit_message` returns.\n        pgomsg.messageLsn;\n        // (boolean)\n        pgomsg.transactional;\n\n      // Transaction end boundary.\n      case 'commit':\n        // (string) Equals to `.commitLsn` of preceding `begin` message.\n        pgomsg.commitLsn;\n 
       // (bigint) Microseconds since unix epoch.\n        pgomsg.commitTime;\n    }\n  }\n}\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fkagis%2Fpgwire","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fkagis%2Fpgwire","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fkagis%2Fpgwire/lists"}