{"id":47733444,"url":"https://github.com/kinetexjs/kinetex","last_synced_at":"2026-04-02T22:02:29.145Z","repository":{"id":348569356,"uuid":"1198718890","full_name":"kinetexjs/kinetex","owner":"kinetexjs","description":"The universal HTTP client.","archived":false,"fork":false,"pushed_at":"2026-04-01T21:18:42.000Z","size":175,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-02T04:42:36.201Z","etag":null,"topics":["browser","bun","cloudflare","deno","fetch-api","fetch-client","http","http-client","https","https-client","nodefetch","nodejs","undici"],"latest_commit_sha":null,"homepage":"https://kinetexjs.github.io/kinetex/","language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/kinetexjs.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":".github/CODEOWNERS","security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-04-01T17:34:15.000Z","updated_at":"2026-04-01T21:18:46.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/kinetexjs/kinetex","commit_stats":null,"previous_names":["kinetexjs/kinetex"],"tags_count":2,"template":false,"template_full_name":null,"purl":"pkg:github/kinetexjs/kinetex","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kinetexjs%2Fkinetex","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kinetexjs%2Fkinetex/tags","releases_url":"https://repos.
ecosyste.ms/api/v1/hosts/GitHub/repositories/kinetexjs%2Fkinetex/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kinetexjs%2Fkinetex/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/kinetexjs","download_url":"https://codeload.github.com/kinetexjs/kinetex/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kinetexjs%2Fkinetex/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31317831,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-02T21:35:00.834Z","status":"ssl_error","status_checked_at":"2026-04-02T21:34:59.806Z","response_time":89,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["browser","bun","cloudflare","deno","fetch-api","fetch-client","http","http-client","https","https-client","nodefetch","nodejs","undici"],"created_at":"2026-04-02T22:02:27.290Z","updated_at":"2026-04-02T22:02:29.140Z","avatar_url":"https://github.com/kinetexjs.png","language":"JavaScript","readme":"\u003ca href=\"https://ibb.co/yc8jvmGx\"\u003e\u003cimg src=\"https://i.ibb.co/tTbrjwk1/1775122989290.png\" alt=\"1775122989290\" border=\"0\"\u003e\u003c/a\u003e\n\n\u003e **The universal HTTP client for the modern JavaScript ecosystem.**  \n\u003e Zero runtime dependencies. Every runtime. 
Every style.\n\n[![NPM](https://img.shields.io/npm/v/kinetex)](https://www.npmjs.com/package/kinetex)\n[![JSR](https://jsr.io/badges/@kinetexjs/kinetex)](https://jsr.io/@kinetexjs/kinetex)\n[![TypeScript](https://img.shields.io/badge/TypeScript-5.x-blue)](https://www.typescriptlang.org/)\n[![Coverage](https://codecov.io/gh/kinetexjs/kinetex/branch/main/graph/badge.svg)](https://codecov.io/gh/kinetexjs/kinetex)\n[![Downloads](https://img.shields.io/npm/dw/kinetex?style=flat-square\u0026label=Downloads\u0026color=green)](https://npmjs.com/package/kinetex)\n[![Docs](https://img.shields.io/badge/docs-GitHub%20Pages-blueviolet)](https://kinetexjs.github.io/kinetex)\n\n---\n\n## Why kinetex?\n\nkinetex is a **batteries-included fetch wrapper** for modern JavaScript runtimes (Node 18+, browsers, Deno, Bun, Cloudflare Workers). It sits above `fetch` and optionally `undici`, adding the things you would otherwise wire up yourself:\n\n- Built-in auth (Basic, Bearer, OAuth 2.0, AWS SigV4, Digest, API key)\n- Stale-while-revalidate cache with custom store support\n- Retry with exponential back-off\n- Request deduplication\n- SSE streaming with auto-reconnect\n- GraphQL helper, HAR recording, cookie jar, proxy middleware\n- A composable middleware pipeline\n- End-to-end TypeScript generics\n\n**What it is not:** a low-level HTTP client. It does not reimplement HTTP/2, connection pooling, or TLS — those come from `undici` (optional peer dep) or the platform's native `fetch`. If you need raw socket control or maximum throughput without any abstraction overhead, use undici directly.\n\nSee [`benchmarks/http-comparison.mjs`](benchmarks/http-comparison.mjs) for an honest performance comparison.\n\n---\n\n## ⚡ Performance Benchmarks\n\nKinetex is engineered for zero-cost abstraction. 
By leveraging a highly optimized middleware pipeline and the `undici` engine, it provides a \"luxury\" feature set (Auth, Retries, Cache, Hooks) with performance that rivals or beats native low-level clients.\n\n### 📊 Comparison Table\n*Measured on Node.js v22.22.1 (Small JSON payload, 500 measured + 50 warmup, Concurrency: 1)*\n\n| Client | Req/s | Mean Latency | p99 Latency | Heap Δ (KB/req) |\n| :--- | :---: | :---: | :---: | :---: |\n| **Kinetex + Undici** ★ | **585** | **1.71 ms** | **2.73 ms** | **4.5 KB** |\n| [Axios](https://github.com/axios/axios) | 570 | 1.75 ms | 3.71 ms | 16.6 KB |\n| [Node-Fetch](https://github.com/node-fetch/node-fetch) | 518 | 1.93 ms | 3.92 ms | 14.5 KB |\n| [Native Fetch](https://nodejs.org/api/globals.html#fetch) | 511 | 1.96 ms | 4.94 ms | 21.6 KB |\n| **Kinetex (Standard)** ★ | **509** | **1.96 ms** | **4.86 ms** | **5.4 KB** |\n| [Got](https://github.com/sindresorhus/got) | 508 | 1.97 ms | 2.73 ms | 18.5 KB |\n| [Ky](https://github.com/sindresorhus/ky) | 485 | 2.06 ms | 2.61 ms | 23.2 KB |\n| [Superagent](https://github.com/ladjs/superagent) | 386 | 2.59 ms | 4.42 ms | 22.9 KB |\n\n---\n\n### 🚀 Key Takeaways\n\n#### 1. Zero-Cost Middleware Pipeline\nThe Kinetex pipeline adds only **~0.007ms** of overhead per request. You get advanced features like AWS SigV4, automatic retries, and request deduplication for essentially zero CPU tax. Once the V8 JIT compiler optimizes the pipeline, the overhead becomes practically immeasurable.\n\n#### 2. Extreme Memory Efficiency\nKinetex uses **75% less memory** per request than native fetch and **70% less** than Axios. In high-traffic production environments, this significantly reduces **Garbage Collection (GC) pressure**, leading to lower CPU spikes and a smaller infrastructure footprint.\n\n#### 3. Tail Latency Stability\nWith a p99 latency of **2.73ms**, Kinetex is more predictable under load than native fetch (4.94ms). 
This prevents the random \"lag spikes\" often seen in microservices when using standard promise-based wrappers.\n\n#### 4. Green Computing\nBy reducing heap allocation by **15-18KB per request** compared to competitors, Kinetex allows your containers to handle more concurrent traffic with less RAM, directly lowering your cloud computing costs.\n\n---\n\n\u003e **Note:** While `undici (direct)` and `node:http (raw)` are faster, they lack the high-level features (middleware, automatic JSON parsing, easy auth) that Kinetex provides. Kinetex aims to be the fastest **feature-complete** HTTP client for the Node.js ecosystem.\n\n*Run the benchmarks yourself:*\n```bash\nnode --expose-gc benchmarks/http-comparison.mjs\n```\n---\n\n## Table of Contents\n\n- [Installation](#installation)\n- [Quick Start](#quick-start)\n- [Runtime Support](#runtime-support)\n- [API Styles](#api-styles)\n  - [Async / Await](#async--await)\n  - [Fluent Chain](#fluent-chain)\n  - [Callback](#callback)\n- [Instances \u0026 Configuration](#instances--configuration)\n- [Request Config Reference](#request-config-reference)\n- [Response Object](#response-object)\n- [Authentication](#authentication)\n- [Schema Validation](#schema-validation)\n- [Caching](#caching)\n- [Retry](#retry)\n- [Interceptors](#interceptors)\n- [Lifecycle Hooks](#lifecycle-hooks)\n- [Middleware Pipeline](#middleware-pipeline)\n- [SSE Streaming](#sse-streaming)\n- [GraphQL](#graphql)\n- [File Upload \u0026 Progress](#file-upload--progress)\n- [HAR Recording](#har-recording)\n- [Cookie Jar](#cookie-jar)\n- [Proxy \u0026 SOCKS5](#proxy--socks5)\n- [Concurrency \u0026 Rate Limiting](#concurrency--rate-limiting)\n- [Response Size Limiting](#response-size-limiting)\n- [HTTP/2 and HTTP/3](#http2-and-http3)\n- [Error Handling](#error-handling)\n- [Request Deduplication](#request-deduplication)\n- [Logging](#logging)\n- [CDN / Browser Usage](#cdn--browser-usage)\n- [Development](#development)\n\n---\n\n## Installation\n\n```bash\nnpm 
install kinetex\n```\n\n### Optional peer dependencies\n\nInstall only what you need — kinetex works without any of them.\n\n```bash\nnpm install undici   # HTTP/2 + HTTP/3 in Node.js (auto-detected)\nnpm install zod      # Schema validation\nnpm install valibot  # Alternative schema validation\nnpm install socks-proxy-agent    # SOCKS5 proxy support\n```\n\n---\n\n## Quick Start\n\n```ts\nimport kinetex from 'kinetex';\n\n// Simple GET\nconst { data } = await kinetex.get\u003cUser\u003e('https://api.example.com/users/1');\nconsole.log(data.name);\n\n// POST with JSON body\nconst { data: created } = await kinetex.post('https://api.example.com/users', {\n  json: { name: 'Alice', email: 'alice@example.com' },\n});\n\n// Fluent chain\nconst { data: users } = await kinetex\n  .chain('https://api.example.com/users')\n  .bearer('my-token')\n  .query({ page: 1, limit: 20 })\n  .timeout(5000)\n  .retry(3);\n```\n\n---\n\n## Runtime Support\n\nkinetex works identically across all major JavaScript runtimes. 
Import it the same way everywhere.\n\n### Node.js\n\n```ts\nimport kinetex from 'kinetex';   // ESM ✅\n\nconst kinetex = require('kinetex');  // CJS ✅\n\nconst { data } = await kinetex.get('https://api.example.com/users');\n```\n\n### Deno\n\n```ts\nimport kinetex, { create } from 'npm:kinetex';\n// or via JSR:\nimport kinetex from 'jsr:@kinetexjs/kinetex';\n\nconst { data } = await kinetex.get('https://api.example.com/users');\n```\n\n\u003e **Note:** `proxyMiddleware()` has no effect in Deno — configure proxies at the OS level or use the `HTTPS_PROXY` environment variable.\n\n### Bun\n\n```ts\nimport kinetex from 'kinetex'; // identical to Node.js\n```\n\n### Cloudflare Workers\n\n```ts\nimport { create, auth } from 'kinetex';\n\nconst baseApi = create({ baseURL: 'https://api.example.com', timeout: 5000 });\n\nexport default {\n  async fetch(request: Request, env: Env): Promise\u003cResponse\u003e {\n    const api = baseApi.extend(auth.bearer(env.API_TOKEN));\n    const { data } = await api.get('/users');\n    return Response.json(data);\n  },\n};\n\ninterface Env { API_TOKEN: string; }\n```\n\n### Browser / CDN\n\n```html\n\u003c!-- unpkg --\u003e\n\u003cscript src=\"https://unpkg.com/kinetex/dist/browser/kinetex.min.js\"\u003e\u003c/script\u003e\n\n\u003c!-- jsDelivr --\u003e\n\u003cscript src=\"https://cdn.jsdelivr.net/npm/kinetex/dist/browser/kinetex.min.js\"\u003e\u003c/script\u003e\n\n\u003cscript\u003e\n  kinetex.get('https://api.example.com/data').then(({ data }) =\u003e console.log(data));\n\u003c/script\u003e\n```\n\nESM in the browser:\n\n```html\n\u003cscript type=\"module\"\u003e\n  import kinetex from 'https://cdn.jsdelivr.net/npm/kinetex/dist/browser/kinetex.esm.min.js';\n  const { data } = await kinetex.get('https://api.example.com/posts');\n\u003c/script\u003e\n```\n\n---\n\n## API Styles\n\n### Async / Await\n\nThe default style — clean, typed, predictable.\n\n```ts\nimport kinetex from 'kinetex';\n\n// GET shorthand (call instance 
directly)\nconst { data } = await kinetex\u003cUser[]\u003e('https://api.example.com/users');\n\n// HTTP verb methods\nconst { data } = await kinetex.get\u003cUser\u003e('/users/1');\nconst { data } = await kinetex.post('/users', { json: { name: 'Alice' } });\nconst { data } = await kinetex.put('/users/1', { json: { name: 'Bob' } });\nconst { data } = await kinetex.patch('/users/1', { json: { email: 'bob@example.com' } });\nconst { data } = await kinetex.delete('/users/1');\nconst { data } = await kinetex.head('/users');\nconst { data } = await kinetex.options('/users');\n```\n\n### Fluent Chain\n\nBuild requests incrementally. The chain is a `Promise` — `await` it when ready.\n\n```ts\nconst { data } = await kinetex\n  .chain('https://api.example.com/users')\n  .method('POST')\n  .bearer('my-token')\n  .header('X-Request-ID', crypto.randomUUID())\n  .json({ name: 'Alice' })\n  .timeout(5000)\n  .retry(3);\n```\n\n**Chain method reference:**\n\n| Method | Description |\n|--------|-------------|\n| `.method(m)` | HTTP method |\n| `.header(k, v)` | Add a single header |\n| `.headers(obj)` | Merge multiple headers |\n| `.query(params)` | URL search params |\n| `.send(body)` | Raw body |\n| `.json(data)` | JSON body — sets `Content-Type: application/json` |\n| `.form(data)` | URL-encoded form body |\n| `.multipart(data)` | `multipart/form-data` (FormData or plain record) |\n| `.auth(value)` | `Authorization` header verbatim |\n| `.bearer(token)` | `Authorization: Bearer \u003ctoken\u003e` |\n| `.basic(user, pass)` | `Authorization: Basic \u003cbase64\u003e` |\n| `.timeout(ms)` | Request timeout |\n| `.retry(n)` | Retry count |\n| `.accept(type)` | `Accept` header |\n| `.type(ct)` | `Content-Type` header |\n| `.schema(v)` | Validate response with zod / valibot / custom |\n| `.signal(s)` | `AbortSignal` for cancellation |\n| `.cancel()` | Returns an `AbortController` wired to this request — call `.abort()` on it when you want to cancel |\n| `.onUpload(fn)` | Upload 
progress callback |\n| `.onDownload(fn)` | Download progress callback |\n| `.as\u003cU\u003e()` | Re-type the response generic |\n\n```ts\n// Multipart upload\nconst fd = new FormData();\nfd.append('file', fileBlob, 'avatar.png');\nawait kinetex.chain('/upload').multipart(fd);\n\n// Or from a plain record (auto-converted to FormData)\nawait kinetex.chain('/upload').multipart({ file: fileBlob, caption: 'photo' });\n\n// Cancel a request — call controller.abort() when you actually want to cancel.\n// cancel() does NOT abort immediately; the request fires lazily on first .then()\n// so you need the controller to exist before it starts.\nconst chain = kinetex.chain('https://api.example.com/long-poll');\nconst controller = chain.cancel(); // wires the signal — NOT aborted yet\n// ... start the request\nconst promise = chain; // triggers dispatch\n// ... later:\ncontroller.abort(); // now it cancels\n```\n\n### Callback\n\nDrop-in compatible with the `request` package callback style.\n\n```ts\nkinetex.callback('https://api.example.com/data', {}, (err, res, data) =\u003e {\n  if (err) return console.error(err);\n  console.log(data);\n});\n```\n\n---\n\n## Instances \u0026 Configuration\n\nCreate isolated instances with their own defaults, middleware, and interceptors.\n\n```ts\nimport { create } from 'kinetex';\n\nconst api = create({\n  baseURL: 'https://api.example.com',\n  timeout: 10_000,\n  retry: { limit: 3, delay: (n) =\u003e 100 * 2 ** n },\n  headers: { 'X-App': 'my-app/1.0' },\n});\n\n// Scoped sub-instance (inherits parent defaults)\nconst usersApi = api.create({ baseURL: 'https://api.example.com/users' });\n\n// Extend with middleware — returns a new instance, does not mutate\nconst authedApi = api.extend(auth.bearer('secret'));\n\n// Add middleware in-place — mutates this instance\napi.use(myLoggingMiddleware);\n\n// Cancel ALL in-flight requests on this instance (e.g. 
on component unmount)\napi.cancelAll();\n// The instance resets automatically — subsequent requests work normally\n```\n\n### Timeout granularity\n\n```ts\n// Simple: single number applies to the whole request lifecycle\nawait api.get('/data', { timeout: 5000 });\n\n// Granular: separate timeouts for connection vs full response\nawait api.get('/data', {\n  timeout: {\n    request: 30_000,  // abort if total request takes \u003e 30s\n    response: 5_000,  // abort if no first byte within 5s (TTFB guard)\n  },\n});\n```\n\n---\n\n## Request Config Reference\n\nEvery request method accepts a `RequestConfig` object. All fields are optional except `url`.\n\n```ts\ninterface RequestConfig\u003cT = unknown\u003e {\n  url: string;\n  method?: string;                      // default: 'GET'\n\n  // ── Headers \u0026 Body ─────────────────────────────────────────────────────\n  headers?: Record\u003cstring,string\u003e | [string,string][] | Headers;\n  body?: BodyInit;                      // raw body\n  json?: unknown;                       // serialised + Content-Type: application/json\n  form?: Record\u003cstring, string|number|boolean\u003e; // application/x-www-form-urlencoded\n  searchParams?: Record\u003cstring, string|number|boolean|string[]\u003e | URLSearchParams | string;\n\n  // ── Response ───────────────────────────────────────────────────────────\n  responseType?: 'json'|'text'|'blob'|'arrayBuffer'|'formData'|'stream';\n  schema?: ZodSchema | ValibotSchema | { parse(d: unknown): T };  // validates body\n  throwHttpErrors?: boolean;            // default: true\n  decompress?: boolean;                 // default: true — set false to get raw compressed bytes\n\n  // ── Network ───────────────────────────────────────────────────────────\n  baseURL?: string;\n  timeout?: number | {\n    request?: number;   // total request timeout (ms)\n    response?: number;  // TTFB timeout — aborts if first byte not received in time\n  };\n  retry?: number | RetryConfig;\n  
cache?: CacheConfig | false;\n  dedupe?: boolean;                     // default: true for GET\n  signal?: AbortSignal;\n  credentials?: RequestCredentials;\n  followRedirects?: boolean;            // default: true\n  maxRedirects?: number;                // default: 10\n  transport?: 'fetch' | 'undici';      // force transport\n\n  // ── Callbacks ─────────────────────────────────────────────────────────\n  onUploadProgress?: (event: ProgressEvent) =\u003e void;   // Node + browser\n  onDownloadProgress?: (event: ProgressEvent) =\u003e void;\n\n  // ── Observability ─────────────────────────────────────────────────────\n  hooks?: HookConfig;\n  logger?: Logger;\n  har?: boolean;\n}\n```\n\n---\n\n## Response Object\n\nEvery request resolves to a `KinetexResponse\u003cT\u003e`:\n\n```ts\nconst res = await kinetex.get\u003cUser\u003e('/users/1');\n\nres.data        // T          — parsed body\nres.status      // number     — HTTP status code\nres.statusText  // string\nres.headers     // Headers    — response headers\nres.response    // Response   — original fetch Response (for streaming)\nres.request     // RequestConfig — originating request\nres.fromCache   // boolean    — true if served from cache\nres.retries     // number     — retry attempts made\nres.timing      // { start, end, duration, ttfb, ... 
}\nres.harEntry    // HarEntry   — present when har: true\n```\n\n---\n\n## Authentication\n\n```ts\nimport { auth, create } from 'kinetex';\n\n// Basic auth\ncreate().extend(auth.basic('username', 'password'));\n\n// Bearer token (static)\ncreate().extend(auth.bearer('my-token'));\n\n// Bearer token (async — refreshed on each request)\ncreate().extend(auth.bearer(async () =\u003e getAccessToken()));\n\n// API key in a header\ncreate().extend(auth.apiKey('key-value', { header: 'X-API-Key' }));\n\n// API key as a query param\ncreate().extend(auth.apiKey('key-value', { query: 'api_key' }));\n\n// OAuth 2.0 client credentials — auto-fetches and refreshes tokens\ncreate().extend(auth.oauth2({\n  tokenUrl: 'https://auth.example.com/oauth/token',\n  clientId: process.env.CLIENT_ID,\n  clientSecret: process.env.CLIENT_SECRET,\n  scope: 'read:users write:posts',\n  onToken: (token) =\u003e console.log('New token, expires in:', token.expires_in),\n}));\n\n// AWS SigV4 — signed using Web Crypto, no extra dependency\ncreate().extend(auth.aws({\n  accessKeyId: process.env.AWS_ACCESS_KEY_ID,\n  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,\n  sessionToken: process.env.AWS_SESSION_TOKEN, // optional\n  region: 'us-east-1',\n  service: 's3',\n}));\n\n// Digest auth — RFC 2617 compliant MD5 (nginx, Apache compatible)\ncreate().extend(auth.digest('username', 'password'));\n```\n\n---\n\n## Schema Validation\n\nkinetex integrates with any schema library that exposes `.parse()` or `.safeParse()`.\n\n### Zod\n\n```ts\nimport { z } from 'zod';\n// import * as z from 'zod';\nimport kinetex from 'kinetex';\n\nconst UserSchema = z.object({\n  id: z.number(),\n  name: z.string(),\n  email: z.string().email(),\n});\n\n// data is typed and validated at runtime — throws ValidationError on mismatch\nconst { data } = await kinetex.get('https://api.example.com/users/1', {\n  schema: UserSchema,\n});\n\nconsole.log(data.name); // string ✅\n```\n\n### Valibot\n\n```ts\nimport * as v from 
'valibot';\n\nconst UserSchema = v.object({ id: v.number(), name: v.string() });\nconst { data } = await kinetex.get('/users/1', { schema: UserSchema });\n```\n\n### Custom validator\n\nAny object with `.parse()` or `.safeParse()` works:\n\n```ts\nconst { data } = await kinetex.get('/users/1', {\n  schema: {\n    parse: (raw) =\u003e {\n      if (!raw || typeof raw !== 'object') throw new Error('Invalid');\n      return raw as User;\n    },\n  },\n});\n```\n\n### Chain API\n\n```ts\nconst { data } = await kinetex\n  .chain('/users/1')\n  .schema(UserSchema);\n```\n\n---\n\n## Caching\n\nBuilt-in in-memory cache with stale-while-revalidate support. Works with all response types.\n\nThe default cache store is **per-instance** — each `create()` call gets its own isolated store, so cache entries from one instance never bleed into another. To share a cache across instances, pass a custom `store` explicitly.\n\n```ts\n// Cache for 60s, serve stale for up to 5 more minutes while revalidating in the background\nconst { data, fromCache } = await kinetex.get('/api/config', {\n  cache: { ttl: 60_000, swr: 300_000 },\n});\nconsole.log('from cache?', fromCache);\n\n// Opt out for one request\nawait api.get('/realtime-data', { cache: false });\n\n// Custom cache key\nawait api.get('/users', {\n  cache: {\n    ttl: 30_000,\n    key: (req) =\u003e `v2:${req.url}`,\n  },\n});\n```\n\n### Custom store (e.g. Redis)\n\n```ts\nimport { create } from 'kinetex';\n\nconst api = create({\n  cache: {\n    ttl: 30_000,\n    store: {\n      get: (key) =\u003e redis.get(key).then(v =\u003e v ? 
JSON.parse(v) : undefined),\n      set: (key, entry) =\u003e redis.setex(\n        key,\n        Math.ceil((entry.expiresAt - Date.now()) / 1000),\n        JSON.stringify(entry)\n      ),\n      delete: (key) =\u003e redis.del(key),\n      clear: () =\u003e redis.flushdb(),\n    },\n  },\n});\n```\n\n---\n\n## Retry\n\n```ts\nconst { data } = await kinetex.get('/api/data', {\n  retry: {\n    limit: 4,\n    statusCodes: [429, 500, 502, 503, 504],\n    methods: ['GET', 'POST'],\n    delay: (attempt) =\u003e Math.min(200 * 2 ** attempt, 10_000), // exponential back-off\n    onNetworkError: true,\n    onRetry: (attempt, err, req) =\u003e console.warn(`Retry ${attempt} for ${req.url}:`, err),\n  },\n});\n\n// Shorthand — retries 3 times with defaults\nconst { data } = await kinetex.get('/api/data', { retry: 3 });\n```\n\n---\n\n## Interceptors\n\nAxios-compatible interceptor API. Interceptors run as part of the request pipeline.\n\n```ts\nimport { create } from 'kinetex';\n\nconst api = create({ baseURL: 'https://api.example.com' });\n\n// Request interceptor — add a tracing header to every request\nconst reqId = api.interceptors.request.use(async (config) =\u003e ({\n  ...config,\n  headers: { ...config.headers, 'X-Request-ID': crypto.randomUUID() },\n}));\n\n// Response interceptor — unwrap a nested payload\nconst resId = api.interceptors.response.use((response) =\u003e ({\n  ...response,\n  data: response.data?.payload ?? response.data,\n}));\n\n// Remove individual interceptors\napi.interceptors.request.eject(reqId);\napi.interceptors.response.eject(resId);\n```\n\n---\n\n## Lifecycle Hooks\n\nFine-grained hooks for observability, modification, and error handling. 
Set per-request or on an instance.\n\n```ts\nconst { data } = await kinetex.get('/api/data', {\n  hooks: {\n    // Modify the request before it is sent\n    beforeRequest: [\n      (config) =\u003e ({ ...config, url: config.url + '?source=kinetex' }),\n    ],\n    // Inspect or transform the response\n    afterResponse: [\n      (response) =\u003e {\n        console.log(`${response.status} in ${response.timing.duration}ms`);\n        return response;\n      },\n    ],\n    // React to any error (does not suppress it)\n    onError: [\n      (err) =\u003e metrics.increment('http.error', { status: err.response?.status }),\n    ],\n  },\n});\n```\n\n---\n\n## Middleware Pipeline\n\nThe full power of kinetex — compose arbitrary request/response transforms into the pipeline.\n\n```ts\nimport { create, compose } from 'kinetex';\nimport type { Middleware } from 'kinetex';\n\n// Write a middleware\nconst timingMiddleware: Middleware = async (request, next) =\u003e {\n  const start = Date.now();\n  const response = await next(request);\n  console.log(`${request.method} ${request.url} — ${Date.now() - start}ms`);\n  return response;\n};\n\nconst api = create({ baseURL: 'https://api.example.com' });\napi.use(timingMiddleware);\n\n// Compose multiple middlewares manually\nconst pipeline = compose([timingMiddleware, authMiddleware], coreHandler);\n```\n\n---\n\n## SSE Streaming\n\nServer-Sent Events with automatic reconnection, `Last-Event-ID` tracking, and server-side `retry:` interval support.\n\n```ts\nimport { sse } from 'kinetex/plugins';\n\nconst controller = new AbortController();\nsetTimeout(() =\u003e controller.abort(), 30_000); // 30s limit\n\nfor await (const event of sse('https://api.example.com/events', {\n  signal: controller.signal,\n  headers: { Authorization: 'Bearer my-token' },\n  maxRetries: 10,   // Infinity by default\n})) {\n  console.log(`[${event.event ?? 
'message'}]`, event.data);\n  if (event.id) console.log('event id:', event.id);\n  if (event.event === 'done') break;\n}\n```\n\n**SSE event fields:**\n\n```ts\ninterface SSEEvent {\n  data: string;        // event data\n  event?: string;      // event type (from \"event:\" line)\n  id?: string;         // event id (from \"id:\" line)\n  retry?: number;      // retry interval in ms (from \"retry:\" line)\n}\n```\n\n---\n\n## GraphQL\n\nType-safe GraphQL requests with automatic error detection.\n\n```ts\nimport { create } from 'kinetex';\nimport { graphqlPlugin } from 'kinetex/plugins';\n\nconst api = create({ baseURL: 'https://api.example.com' });\n\ninterface UserData { user: { id: string; name: string } }\ninterface UserVars { id: string }\n\nconst config = graphqlPlugin\u003cUserVars, UserData\u003e('/graphql', {\n  query: `\n    query GetUser($id: ID!) {\n      user(id: $id) { id name }\n    }\n  `,\n  variables: { id: '42' },\n  operationName: 'GetUser',\n});\n\nconst { data } = await api.post(config.url, config);\n// Throws automatically if data.errors is present\nconsole.log(data.data?.user.name);\n```\n\n---\n\n## File Upload \u0026 Progress\n\n### Multipart upload\n\n```ts\nconst form = new FormData();\nform.append('title', 'My Upload');\nform.append('file', fileBlob, 'document.pdf');\n\nconst { data } = await kinetex.post('/upload', { body: form });\n```\n\n### Upload progress\n\nIn Node.js, `onUploadProgress` wraps the request body in a progress-tracking `ReadableStream` that emits events as undici reads each chunk. In Bun, upload progress uses Bun's native fetch streaming. 
In the browser, kinetex automatically falls back to `XMLHttpRequest` to provide real upload progress events.\n\n```ts\nimport type { ProgressEvent as KinetexProgressEvent } from 'kinetex';\n\nawait kinetex.post('/upload', {\n  body: fileBuffer,\n  onUploadProgress: (event: KinetexProgressEvent) =\u003e {\n    console.log(`${event.percent?.toFixed(0)}% — ${(event.bytesPerSecond / 1024).toFixed(1)} KB/s`);\n  },\n});\n```\n\n### Download progress\n\n```ts\nawait kinetex.get('/large-file.zip', {\n  responseType: 'blob',\n  onDownloadProgress: ({ loaded, total, percent }) =\u003e {\n    progressBar.value = percent ?? 0;\n  },\n});\n```\n\n**`ProgressEvent` fields:**\n\n```ts\ninterface ProgressEvent {\n  loaded: number;               // bytes transferred so far\n  total: number | undefined;    // total bytes (undefined if unknown)\n  percent: number | undefined;  // 0–100 (undefined if total unknown)\n  transferredBytes: number;     // same as loaded\n  bytesPerSecond: number;       // current transfer speed\n}\n```\n\n---\n\n## HAR Recording\n\nRecord all HTTP traffic as a [HAR 1.2](https://www.softwareishard.com/blog/har-12-spec/) file for debugging, testing, or replaying.\n\n```ts\nimport { create } from 'kinetex';\nimport { writeFileSync } from 'node:fs';\n\nconst api = create();\n\nawait api.get('https://api.example.com/users', { har: true });\nawait api.post('https://api.example.com/posts', { json: { title: 'test' }, har: true });\n\nconst har = api.exportHAR();\n// har.log.version === '1.2'\n// har.log.entries — array of HarEntry\n\nwriteFileSync('trace.har', JSON.stringify(har, null, 2));\n// Open in Chrome DevTools → Network → Import HAR\n```\n\n---\n\n## Cookie Jar\n\nRFC 6265 compliant cookie jar — no dependencies.\n\n```ts\nimport { create } from 'kinetex';\nimport { cookieJar, withCookies } from 'kinetex/plugins';\n\nconst jar = cookieJar();\nconst api = create().use(withCookies(jar));\n\n// Cookies from Set-Cookie headers are stored automatically\nawait 
api.get('https://example.com/login');\n\n// Stored cookies are sent automatically on subsequent requests\nawait api.get('https://example.com/dashboard');\n\n// Manual jar operations\nconst all = jar.getAll();                              // all stored cookies\nconst header = jar.getCookieString('https://example.com/');  // \"name=value; ...\"\njar.delete('session', 'example.com', '/');            // remove one cookie\njar.clear();                                           // remove all\n```\n\n---\n\n## Proxy \u0026 SOCKS5\n\n\u003e **Node.js and Bun only.** Calling proxy middleware in browser, edge, or Deno environments logs a `console.warn` and falls through — there is no silent failure.\n\n```ts\nimport { create } from 'kinetex';\nimport { proxyMiddleware, envProxy } from 'kinetex/plugins';\n\n// Explicit HTTP/HTTPS proxy\nconst api = create().use(proxyMiddleware({\n  url: 'http://proxy.corp.internal:3128',\n  auth: { username: 'user', password: 'pass' },\n  noProxy: ['internal.corp.com', '.local'],\n}));\n\n// Read from environment variables (HTTP_PROXY, HTTPS_PROXY, NO_PROXY)\nconst api2 = create().use(envProxy());\n\n// SOCKS5 proxy — requires: npm install socks-proxy-agent\nconst api3 = create().use(proxyMiddleware({\n  url: 'socks5://proxy.example.com:1080',\n  protocol: 'socks5',\n}));\n```\n\n**`ProxyConfig` fields:**\n\n```ts\ninterface ProxyConfig {\n  url?: string;                          // proxy URL\n  protocol?: 'http' | 'https' | 'socks5';\n  auth?: { username: string; password: string };\n  noProxy?: string[];                    // hostnames / patterns to bypass\n  headers?: Record\u003cstring, string\u003e;      // extra headers sent to proxy\n}\n```\n\n---\n\n## Concurrency \u0026 Rate Limiting\n\n```ts\nimport { create } from 'kinetex';\nimport { concurrencyLimit, rateLimit } from 'kinetex/plugins';\n\nconst api = create({ baseURL: 'https://api.example.com' })\n  .use(concurrencyLimit(5))                          // max 5 in-flight requests\n  
  .use(rateLimit({ requestsPerSecond: 10, burst: 20 })); // token-bucket rate limiter
```

**`rateLimit` options:**

| Option | Type | Description |
|--------|------|-------------|
| `requestsPerSecond` | `number` | Sustained rate |
| `burst` | `number` | Max burst size (default: `requestsPerSecond`) |

---

## Response Size Limiting

Protect against unexpectedly large responses. The limit is checked against the `Content-Length` header first, then enforced by measuring the actual body.

```ts
import { create, ResponseSizeError } from 'kinetex';
import { responseSizeLimit } from 'kinetex/plugins';

const api = create().use(responseSizeLimit(5 * 1024 * 1024)); // 5 MB limit

try {
  const { data } = await api.get('/large-file');
} catch (err) {
  if (err instanceof ResponseSizeError) {
    console.log(`Too large: ${err.actualBytes} bytes (limit: ${err.maxBytes})`);
  }
}
```

---

## HTTP/2 and HTTP/3

```bash
npm install undici
```

When `undici` is installed, kinetex automatically uses it in Node.js for HTTP/2 and HTTP/3 support. Bun uses its own optimized native fetch. When `undici` is not available, kinetex falls back to native `fetch` transparently.

Connections to each origin are pooled via a persistent `undici.Pool` (10 connections per origin, keep-alive 30s). This means multiple requests to the same host reuse the same TCP/TLS/HTTP2 session — true multiplexing, not just HTTP/2 framing.

```ts
// Force native fetch for one request (e.g. when streaming to a non-undici consumer)
const { data } = await kinetex.get('/api', { transport: 'fetch' });

// Force undici for all requests on this instance
const api = create({ transport: 'undici' });

// Opt out of automatic response decompression (get raw compressed bytes)
const { data: raw } = await kinetex.get('/compressed', { decompress: false });
```

> **Note:** connection pooling is per-process, not per-instance.
All kinetex instances share the same `undici.Pool` for a given origin.

---

## Error Handling

All errors extend `KinetexError`. Catch the specific subclass you care about.

```ts
import { HTTPError, TimeoutError, ValidationError, KinetexError } from 'kinetex';

try {
  await kinetex.get('https://api.example.com/protected');
} catch (err) {
  if (err instanceof HTTPError) {
    console.log(err.response?.status);   // 401, 404, 429 ...
    console.log(err.response?.data);     // parsed error body
    console.log(err.request.url);        // originating URL
  } else if (err instanceof TimeoutError) {
    console.log('Timed out:', err.message);
  } else if (err instanceof ValidationError) {
    console.log('Schema mismatch:', err.validationError);
  } else if (err instanceof KinetexError) {
    console.log('kinetex error:', err.originalCause);
  }
}
```

### Opt out of throwing on non-2xx responses

```ts
const res = await kinetex.get('/api/data', { throwHttpErrors: false });
if (res.status === 404) {
  console.log('not found');
}
```

**Error types:**

| Class | When thrown |
|-------|-------------|
| `KinetexError` | Base class for all errors |
| `HTTPError` | Non-2xx response (when `throwHttpErrors: true`) |
| `TimeoutError` | Request exceeded `timeout` |
| `ValidationError` | Response failed schema validation |
| `ResponseSizeError` | Response exceeded `responseSizeLimit` |

---

## Request Deduplication

Identical in-flight GET requests on the **same instance** are collapsed into a single network call.
All callers receive the same response.

```ts
const api = create({ baseURL: 'https://api.example.com' });

// All three fire simultaneously — only one HTTP request is made
const [r1, r2, r3] = await Promise.all([
  api.get('/config'),
  api.get('/config'),
  api.get('/config'),
]);

// Opt out per-request
await api.get('/always-fresh', { dedupe: false });
```

Deduplication state is **instance-scoped** — two separate `create()` calls never share inflight maps.

**Authorization-aware:** the dedup key includes the `Authorization` header value, so two requests to the same URL but with different credentials (e.g. different user tokens in a multi-tenant server) are never collapsed into the same in-flight request.

---

## Logging

Attach a logger to an instance or individual request for structured observability.

```ts
import { create } from 'kinetex';
import type { Logger } from 'kinetex';

const logger: Logger = {
  request: (config) => console.log(`→ ${config.method} ${config.url}`),
  response: (res)   => console.log(`← ${res.status} ${res.timing.duration}ms`),
  error: (err)      => console.error(`✗ ${err.message}`),
};

// On an instance
const api = create({ logger });

// Or per-request
await kinetex.get('/api/data', { logger });
```

---

## Development

```bash
git clone https://github.com/kinetexjs/kinetex.git
cd kinetex && npm install

npm run typecheck      # TypeScript — 0 errors
npm run lint           # ESLint — 0 warnings
npm run build          # ESM + CJS + types + browser bundles

npm test               # ~149 tests, 0 failures (core suite)
npm run test:all       # ~460 tests, 0 failures (full suite)
npm run test:coverage  # coverage report (≥ 95% ESM line coverage)
npm run bench          # benchmark vs native fetch and node:http (add more: npm install --no-save axios got ky undici)

# Run individual runtimes
npm run test:bun
npm run test:deno

# Generate local API docs
npm run docs
open docs/index.html

# Create a release
node scripts/release.mjs patch   # or minor / major
git push && git push --tags
```

---

## License

[MIT](LICENSE) © Qasim Ali