{"id":47983596,"url":"https://github.com/jasnell/new-streams","last_synced_at":"2026-04-04T11:24:38.281Z","repository":{"id":340987555,"uuid":"1139375731","full_name":"jasnell/new-streams","owner":"jasnell","description":"A proposal for a new streams API","archived":false,"fork":false,"pushed_at":"2026-03-20T05:12:19.000Z","size":861,"stargazers_count":407,"open_issues_count":6,"forks_count":5,"subscribers_count":7,"default_branch":"main","last_synced_at":"2026-03-20T19:14:03.969Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/jasnell.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-01-21T22:02:09.000Z","updated_at":"2026-03-20T15:21:33.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/jasnell/new-streams","commit_stats":null,"previous_names":["jasnell/new-streams"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/jasnell/new-streams","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jasnell%2Fnew-streams","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jasnell%2Fnew-streams/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jasnell%2Fnew-streams/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jasnell%2Fnew-streams/manifests","owner_url":"https://repos.ecosyste
.ms/api/v1/hosts/GitHub/owners/jasnell","download_url":"https://codeload.github.com/jasnell/new-streams/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jasnell%2Fnew-streams/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31397537,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-04T10:20:44.708Z","status":"ssl_error","status_checked_at":"2026-04-04T10:20:06.846Z","response_time":60,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2026-04-04T11:24:36.853Z","updated_at":"2026-04-04T11:24:38.260Z","avatar_url":"https://github.com/jasnell.png","language":"TypeScript","readme":"# Redesigning the Web Streams API\n\n**Date:** January 2026\n\nThis is a **prototype** of an alternative streams API and implementation designed to address\nfundamental ergonomics and performance flaws in the Web Streams API model.\n\nThis is a **work in progress** and should not be considered a final design.\nThe API and implementation are evolving as we explore design tradeoffs and gather feedback.\n\n**DO NOT USE THIS API IN PRODUCTION** — it is not stable and may change significantly.\n\n## Why\n\nThe WHATWG Streams Standard[^1] (\"Web streams\") provides a foundation for streaming data on the web\nplatform, but suffers from significant usability issues stemming from its design predating async\niteration 
and from attempting to serve too many use cases with a single complex abstraction. This\ndocument critiques Web Streams and presents design principles for a simpler, iterable-based\nalternative.\n\nA complete reference implementation is available in this repository demonstrating these principles.\nSee [DESIGN.md](https://github.com/jasnell/new-streams/blob/main/docs/DESIGN.md) for the API design and [API.md](API.md) for the complete API reference.\n\n## Table of Contents\n\n1. [Design Principles for Improvement](#1-design-principles-for-improvement)\n2. [Detailed Design and Reference Implementation](#2-detailed-design-and-reference-implementation)\n3. [Benchmarks](#3-benchmarks)\n\n## 1. Design Principles for Improvement\n\nThe new streams API follows these principles:\n\n### P1: Streams Are Just Iterables\n\nNo custom Stream class with hidden state. Streams are `AsyncIterable\u003cUint8Array[]\u003e` or `Iterable\u003cUint8Array[]\u003e` - standard JavaScript iteration protocols. The `for await...of` syntax is the idiomatic way to consume a stream.\n\n```javascript\n// Streams are standard async iterables\nfor await (const chunks of readable) {\n  for (const chunk of chunks) {\n    process(chunk);\n  }\n}\n```\n\n### P2: Transforms Are Pull-Through\n\nTransforms only execute when the consumer pulls. No eager evaluation, no hidden buffering. Data flows on-demand from source through transforms to consumer.\n\n```javascript\n// Nothing executes until iteration begins\nconst output = Stream.pull(source, compress, encrypt);\n\n// Transforms execute as we iterate\nfor await (const chunks of output) { /* ... */ }\n```\n\n### P3: Explicit Backpressure\n\nStrict by default, reject on overflow. 
All buffering has explicit limits with configurable overflow policies:\n\n- `'strict'` (default) - Writes reject when buffer full\n- `'block'` - Writes wait until space available\n- `'drop-oldest'` - Drop oldest buffered chunks to make room\n- `'drop-newest'` - Drop incoming chunks when buffer full\n\n```javascript\nconst { writer, readable } = Stream.push({\n  highWaterMark: 10,\n  backpressure: 'drop-oldest'\n});\n```\n\nPull-streams implement inherent backpressure by only producing data when the consumer requests it.\n\nPush-streams strictly enforce the configured backpressure policy on writes by default.\n\n### P4: Batched Chunks\n\nIterables yield `Uint8Array[]` (arrays of chunks) to amortize async overhead. This batching reduces promise creation per chunk:\n\n```javascript\nfor await (const chunks of readable) {  // chunks is Uint8Array[]\n  for (const chunk of chunks) {         // chunk is Uint8Array\n    process(chunk);\n  }\n}\n```\n\nWriters accept `Uint8Array[]` for batched/vectorized writes:\n\n```javascript\nawait writer.writev([chunk1, chunk2, chunk3]); // Write multiple chunks at once\n```\n\n### P5: Explicit Multi-Consumer\n\nNo built-in `tee()` with hidden unbounded buffers. Instead, explicit multi-consumer patterns:\n\n- `Stream.broadcast()` - Push model with writer pushing to all consumers\n- `Stream.share()` - Pull model with shared source pulled on demand\n\nBoth require explicit buffer limits and backpressure policies:\n\n```javascript\n// Share with explicit buffer management\nconst shared = Stream.share(source, {\n  highWaterMark: 100,\n  backpressure: 'strict'\n});\nconst consumer1 = shared.pull();\nconst consumer2 = shared.pull(decompress);\n```\n\nFor both models, a single cursor-based queue manages data flow to multiple consumers, ensuring predictable memory usage.\n\n### P6: Clean Sync/Async Separation\n\nComplete parallel sync versions for CPU-bound workloads. 
No ambiguity about which path executes:\n\n| Async | Sync |\n|-------|------|\n| `Stream.pull()` | `Stream.pullSync()` |\n| `Stream.pipeTo()` | `Stream.pipeToSync()` |\n| `Stream.bytes()` | `Stream.bytesSync()` |\n| `Stream.text()` | `Stream.textSync()` |\n| `Stream.share()` | `Stream.shareSync()` |\n\nThe design allows for a synchronous fast path when all components are synchronous, eliminating unnecessary promise overhead.\n\n### P7: Non-Negative desiredSize\n\nUnlike Web Streams, `desiredSize` is always \u003e= 0 or null (closed). The API enforces strict backpressure semantics - no negative values indicating \"over capacity.\"\n\n### P8: Bytes Only\n\nThe API deals exclusively with bytes (`Uint8Array`). Strings are UTF-8 encoded automatically. No \"value streams\" - use async iterables directly for streaming arbitrary JS values.\n\n### P9: Chunk-Oriented, Not Byte-Oriented\n\nOperations work on chunks (contiguous byte sequences) rather than individual bytes. No BYOB (bring your own buffer), no min/max read sizes, no partial fill handling.\n\n| Approach        | Overhead                                  | Complexity                           |\n| --------------- | ----------------------------------------- | ------------------------------------ |\n| Byte-oriented   | Per-byte processing, frequent allocations | High (BYOB requests, partial fills)  |\n| Chunk-oriented  | Per-chunk processing, batch allocations   | Low (simple iteration over chunks)   |\n\n### P10: No Transfer/Detach Semantics\n\nThe API does not automatically transfer or detach buffers. Use `ArrayBuffer.transfer()` explicitly when needed. This keeps the API simple and predictable while allowing developers to opt into transfer semantics when performance requires it.\n\n### P11: No forced/hidden promise chains\n\nImplementers may optimize away promise chains when operations complete synchronously. 
The API does not mandate promise creation in all cases, allowing for efficient synchronous fast paths.\n\n---\n\n## 2. Detailed Design and Reference Implementation\n\nA complete reference implementation is available in this repository demonstrating the API design principles above. The implementation is tested (194 tests) and benchmarked against Web Streams.\n\n### Documentation\n\n| Document | Description |\n|----------|-------------|\n| [DESIGN.md](https://github.com/jasnell/new-streams/blob/main/docs/DESIGN.md) | Comprehensive API design document covering push streams, pull pipelines, transforms, consumers, multi-consumer patterns, and protocol extensibility |\n| [API.md](API.md) | Complete API reference for the reference implementation |\n| [USAGE.md](https://github.com/jasnell/new-streams/blob/main/docs/USAGE.md) | Guide to using the API with code examples |\n| [REQUIREMENTS.md](https://github.com/jasnell/new-streams/blob/main/docs/REQUIREMENTS.md) | Detailed list of testable assertions |\n| [MIGRATION.md](https://github.com/jasnell/new-streams/blob/main/docs/MIGRATION.md) | Guide for migrating from Web Streams API to this API |\n| [MIGRATION-NODEJS.md](https://github.com/jasnell/new-streams/blob/main/docs/MIGRATION-NODEJS.md) | Guide for migrating from Node.js streams to this API |\n| [DESIGN-TRADEOFFS.md](https://github.com/jasnell/new-streams/blob/main/docs/DESIGN-TRADEOFFS.md) | Discussion of design tradeoffs and rationale |\n| [COMPLETENESS-ANALYSIS.md](https://github.com/jasnell/new-streams/blob/main/docs/COMPLETENESS-ANALYSIS.md) | Analysis of feature completeness |\n| [TRANSFER-INTEGRATION.md](https://github.com/jasnell/new-streams/blob/main/docs/TRANSFER-INTEGRATION.md) | Design discussion: Transfer Protocol integration for ownership semantics |\n\n### Code\n\n| Resource | Description |\n|----------|-------------|\n| [src/](src/) | TypeScript source code (types.ts, push.ts, from.ts, pull.ts, consumers.ts, broadcast.ts, share.ts) |\n| 
[samples/](samples/) | Sample files demonstrating API usage patterns |\n| [benchmarks/](benchmarks/) | Benchmark suites comparing performance with Web Streams |\n\n### Installing from npm\n\n```bash\nnpm install new-streams\n```\n\n### Running the Reference Implementation\n\n```bash\n# Install dependencies\nnpm install\n\n# Build\nnpm run build\n\n# Run tests (194 tests)\nnpm test\n\n# Run samples\nnpx tsx samples/01-basic-creation.ts\n\n# Run benchmarks\nnpm run benchmark\n\n# Run html samples/benchmark server\nnpm run html-samples\n```\n\n---\n\n## 3. Benchmarks\n\nNote that these numbers are illustrative and will vary by environment and\nimplementation status. They are provided here to demonstrate the performance\ncharacteristics of the new streams API compared to Web Streams and Node.js\nstreams and should not be taken as definitive.\n\nNo assertion is made that these benchmarks represent real-world workloads.\nThere is no intention to claim that the new streams API is universally faster than\nany specific Web streams or Node.js streams implementation in all scenarios.\nThe benchmarks were run on a MacBook Pro (M1 Pro, 16GB RAM) using Node.js v24.x.\n\n```\n──────────────────────────────────────────────────────────────────────\nRunning: 01-throughput.ts\n──────────────────────────────────────────────────────────────────────\n\nBenchmark: Raw Throughput\nMeasuring data flow speed through streams\nComparing: New Streams vs Web Streams vs Node.js Streams\nNew API uses batched iteration (Uint8Array[]) for amortized overhead\n(minimum 20 samples, 3 seconds per test)\n\nRunning: Large chunks...\nRunning: Medium chunks...\nRunning: Small chunks...\nRunning: Tiny chunks...\nRunning: Async iteration...\nRunning: Generator source...\n\n==================================================================================================================================\nBENCHMARK RESULTS (higher throughput = 
better)\n==================================================================================================================================\nScenario                         | New Stream         | Web Stream         | Node Stream        | New vs Web           | New vs Node\n---------------------------------+--------------------+--------------------+--------------------+----------------------+---------------------\nLarge chunks (64KB x 500)        | 4.87 GB/s          | 6.03 GB/s          | 5.83 GB/s          | ~same                | ~same\nMedium chunks (8KB x 2000)       | 5.72 GB/s          | 4.72 GB/s          | 3.83 GB/s          | ~same                | ~same\nSmall chunks (1KB x 5000)        | 4.64 GB/s          | 2.09 GB/s          | 779.10 MB/s        | 2.22x faster         | 5.96x faster\nTiny chunks (100B x 10000)       | 1.18 GB/s          | 275.86 MB/s        | 124.15 MB/s        | 4.28x faster         | 9.50x faster\nAsync iteration (8KB x 1000)     | 310.78 GB/s        | 19.05 GB/s         | 12.39 GB/s         | 16.31x faster        | 25.08x faster\nGenerator source (8KB x 1000)    | 19.71 GB/s         | 14.29 GB/s         | 9.88 GB/s          | ~same                | 1.99x faster\n==================================================================================================================================\n\nNew Stream vs Web Stream: 3 faster, 0 slower, 3 within noise\nNew Stream vs Node Stream: 4 faster, 0 slower, 2 within noise\nSamples per benchmark: 100\n\n──────────────────────────────────────────────────────────────────────\nRunning: 02-push-streams.ts\n──────────────────────────────────────────────────────────────────────\n\nBenchmark: Push Stream Performance\nMeasuring concurrent write/read patterns\nComparing: New Streams vs Web Streams vs Node.js Streams\n(minimum 15-20 samples, 3 seconds per test)\n\nRunning: Concurrent push (medium chunks)...\nRunning: Many small writes...\nRunning: Batch writes...\nRunning: Push + async 
iteration...\n\n==================================================================================================================================\nBENCHMARK RESULTS (higher throughput = better)\n==================================================================================================================================\nScenario                         | New Stream         | Web Stream         | Node Stream        | New vs Web           | New vs Node\n---------------------------------+--------------------+--------------------+--------------------+----------------------+---------------------\nConcurrent push (4KB x 1000)     | 174.92 MB/s        | 181.45 MB/s        | 166.52 MB/s        | ~same                | ~same\nMany small writes (64B x 10000)  | 111.66 MB/s        | 85.41 MB/s         | 77.15 MB/s         | 1.31x faster         | ~same\nBatch writes (512B x 20 x 200)   | 137.98 MB/s        | 145.09 MB/s        | 136.07 MB/s        | ~same                | ~same\nPush + async iter (2KB x 1000)   | 162.86 MB/s        | 166.54 MB/s        | 162.72 MB/s        | ~same                | ~same\n==================================================================================================================================\n\nNew Stream vs Web Stream: 1 faster, 0 slower, 3 within noise\nNew Stream vs Node Stream: 0 faster, 0 slower, 4 within noise\nSamples per benchmark: 100\n\n──────────────────────────────────────────────────────────────────────\nRunning: 03-transforms.ts\n──────────────────────────────────────────────────────────────────────\n\nBenchmark: Transform Performance\nMeasuring data transformation speed\nComparing: New Streams vs Web Streams vs Node.js Streams\n(minimum 15-20 samples, 3 seconds per test)\n\nRunning: Identity transform...\nRunning: XOR transform...\nRunning: Expanding transform...\nRunning: Chained transforms...\nRunning: Async 
transform...\n\n==================================================================================================================================\nBENCHMARK RESULTS (higher throughput = better)\n==================================================================================================================================\nScenario                         | New Stream         | Web Stream         | Node Stream        | New vs Web           | New vs Node\n---------------------------------+--------------------+--------------------+--------------------+----------------------+---------------------\nIdentity transform (8KB x 1000)  | 291.01 GB/s        | 4.89 GB/s          | 4.77 GB/s          | 59.46x faster        | 61.00x faster\nXOR transform (8KB x 500)        | 1.45 GB/s          | 1.08 GB/s          | 1.12 GB/s          | ~same                | ~same\nExpanding 1:2 (4KB x 500)        | 70.51 GB/s         | 4.38 GB/s          | 5.84 GB/s          | 16.08x faster        | 12.07x faster\nChained 3x (8KB x 500)           | 163.89 GB/s        | 2.02 GB/s          | 4.47 GB/s          | 81.24x faster        | 36.64x faster\nAsync transform (8KB x 300)      | 132.62 GB/s        | 4.31 GB/s          | 4.67 GB/s          | 30.78x faster        | 28.40x faster\n==================================================================================================================================\n\nNew Stream vs Web Stream: 4 faster, 0 slower, 1 within noise\nNew Stream vs Node Stream: 4 faster, 0 slower, 1 within noise\nSamples per benchmark: 100\n\n──────────────────────────────────────────────────────────────────────\nRunning: 04-pipelines.ts\n──────────────────────────────────────────────────────────────────────\n\nBenchmark: Pipeline Performance\nMeasuring full pipeline throughput\nComparing: New Streams vs Web Streams vs Node.js Streams\n(minimum 15-20 samples, 3 seconds per test)\n\nRunning: Simple pipeline...\nRunning: Pipeline with transform...\nRunning: Multi-stage 
pipeline...\nRunning: High-frequency small chunks...\n\n==================================================================================================================================\nBENCHMARK RESULTS (higher throughput = better)\n==================================================================================================================================\nScenario                         | New Stream         | Web Stream         | Node Stream        | New vs Web           | New vs Node\n---------------------------------+--------------------+--------------------+--------------------+----------------------+---------------------\nSimple pipeline (8KB x 1000)     | 408.26 GB/s        | 8.63 GB/s          | 48.04 GB/s         | 47.30x faster        | 8.50x faster\nPipeline + XOR (8KB x 1000)      | 1.47 GB/s          | 1.16 GB/s          | 1.13 GB/s          | ~same                | ~same\nMulti-stage 3x (8KB x 500)       | 180.55 GB/s        | 2.09 GB/s          | 4.36 GB/s          | 86.51x faster        | 41.39x faster\nHigh-freq (64B x 20000)          | 2.97 GB/s          | 207.93 MB/s        | 180.83 MB/s        | 14.30x faster        | 16.44x faster\n==================================================================================================================================\n\nNew Stream vs Web Stream: 3 faster, 0 slower, 1 within noise\nNew Stream vs Node Stream: 3 faster, 0 slower, 1 within noise\nSamples per benchmark: 100\n\n──────────────────────────────────────────────────────────────────────\nRunning: 05-tee-branching.ts\n──────────────────────────────────────────────────────────────────────\n\nBenchmark: Share/Branching Performance\nMeasuring stream branching efficiency\nNew API: share() for pull-model multi-consumer\nWeb Streams: tee() for branching\n(minimum 15-20 samples, 3 seconds per test)\n\nRunning: Single share (2 readers)...\nRunning: Share with transforms...\nRunning: Small chunks 
share...\n\n==============================================================================================================\nBENCHMARK RESULTS (higher throughput = better)\n==============================================================================================================\nScenario                         | New Stream             | Web Stream             | Difference         | Significance\n---------------------------------+------------------------+------------------------+--------------------+---------------\nShare 2 readers (4KB x 500)      | 2.53 GB/s              | 2.46 GB/s              | 1.03x faster       | within noise\nShare + transforms (4KB x 500)   | 887.61 MB/s            | 539.06 MB/s            | 1.65x faster       | within noise\nSmall chunks share (256B x 5000  | 2.20 GB/s              | 168.88 MB/s            | 13.01x faster      | significant\n==============================================================================================================\n\nSummary: 1 faster, 0 slower, 2 within noise\nSamples per benchmark: 100\n\n──────────────────────────────────────────────────────────────────────\nRunning: 06-consumption.ts\n──────────────────────────────────────────────────────────────────────\n\nBenchmark: Consumption Methods\nMeasuring different ways to read stream data\nComparing: New Streams vs Web Streams vs Node.js Streams\n(minimum 20 samples, 3 seconds per test)\n\nRunning: bytes() collection...\nRunning: text() decode...\nRunning: Async iteration...\nRunning: Direct iterator consumption...\n\n==================================================================================================================================\nBENCHMARK RESULTS (higher throughput = better)\n==================================================================================================================================\nScenario                         | New Stream         | Web Stream         | Node Stream        | New vs Web           | New vs 
Node\n---------------------------------+--------------------+--------------------+--------------------+----------------------+---------------------\nbytes() (16KB x 500)             | 4.47 GB/s          | 8.47 GB/s          | 7.03 GB/s          | ~same                | ~same\ntext() (1KB chunks)              | 1.68 GB/s          | 856.26 MB/s        | 857.42 MB/s        | ~same                | ~same\nAsync iteration (8KB x 1000)     | 370.00 GB/s        | 21.50 GB/s         | 23.20 GB/s         | 17.21x faster        | 15.95x faster\nIterator loop (8KB x 1000)       | 277.88 GB/s        | 27.84 GB/s         | 24.45 GB/s         | 9.98x faster         | 11.36x faster\n==================================================================================================================================\n\nNew Stream vs Web Stream: 2 faster, 0 slower, 2 within noise\nNew Stream vs Node Stream: 2 faster, 0 slower, 2 within noise\nSamples per benchmark: 100\n\n──────────────────────────────────────────────────────────────────────\nRunning: 07-sync-generators.ts\n──────────────────────────────────────────────────────────────────────\n\nBenchmark: Sync vs Async Sources\nComparing array sources vs async generators\n(minimum 20 samples, 3 seconds per test)\n\nRunning: Large chunks (sync vs async source)...\nRunning: Medium chunks (sync vs async source)...\nRunning: Small chunks (sync vs async source)...\nRunning: Tiny chunks (sync vs async source)...\nRunning: Many tiny chunks (extreme case)...\n\n==============================================================================================================\nBENCHMARK RESULTS (higher throughput = better)\n==============================================================================================================\nScenario                         | New Stream             | Web Stream             | Difference         | 
Significance\n---------------------------------+------------------------+------------------------+--------------------+---------------\nLarge chunks (64KB x 500)        | 4.04 GB/s              | 4.15 GB/s              | 1.03x slower       | within noise\nMedium chunks (8KB x 2000)       | 6.36 GB/s              | 7.28 GB/s              | 1.14x slower       | within noise\nSmall chunks (1KB x 5000)        | 5.30 GB/s              | 2.50 GB/s              | 2.12x faster       | within noise\nTiny chunks (100B x 10000)       | 2.37 GB/s              | 311.45 MB/s            | 7.61x faster       | significant\nMany tiny (10B x 50000)          | 317.52 MB/s            | 29.33 MB/s             | 10.83x faster      | significant\n==============================================================================================================\n\nSummary: 2 faster, 0 slower, 3 within noise\nSamples per benchmark: 100\n\n──────────────────────────────────────────────────────────────────────\nRunning: 08-pipeline-sync.ts\n──────────────────────────────────────────────────────────────────────\n\nBenchmark: pullSync vs pull vs Web Streams\nComparing synchronous pipeline processing performance\n(minimum 30 samples, 3 seconds per test)\n\nRunning: Passthrough (no transforms)...\nRunning: Single transform...\nRunning: Chain of 3 transforms...\nRunning: Many small chunks...\nRunning: Tiny chunks (extreme case)...\n\n==================================================================================================================================\nBENCHMARK RESULTS - Sync Pipeline Comparison\n==================================================================================================================================\nScenario                     | pullSync           | pull (async)       | Web Streams        | Sync vs Async    | Sync vs Web\n-----------------------------+--------------------+--------------------+--------------------+------------------+-----------------\nPassthrough 
(8KB x 1000)     | 335.42 GB/s        | 293.19 GB/s        | 24.22 GB/s         | 1.1x faster      | 13.8x faster\nSingle transform (4KB x 100  | 224.82 MB/s        | 229.66 MB/s        | 194.17 MB/s        | 1.0x slower      | 1.2x faster\n3 transforms (4KB x 500)     | 401.01 MB/s        | 398.66 MB/s        | 292.58 MB/s        | 1.0x faster      | 1.4x faster\nSmall chunks (256B x 10000)  | 16.82 GB/s         | 21.57 GB/s         | 182.21 MB/s        | 1.3x slower      | 92.3x faster\nTiny chunks (64B x 20000)    | 7.11 GB/s          | 7.19 GB/s          | 201.79 MB/s        | 1.0x slower      | 35.2x faster\n==================================================================================================================================\n\nAverage speedup of pullSync vs pull: 1.0x\nAverage speedup of pullSync vs Web Streams: 28.8x\n\nNote: pullSync returns a sync generator (no async overhead)\n      pull and Web Streams use async iteration\n\n──────────────────────────────────────────────────────────────────────\nRunning: 09-sync-async-comparison.ts\n──────────────────────────────────────────────────────────────────────\n\nBenchmark: Sync vs Async API Comparison\nComparing sync path (fromSync + pullSync + bytesSync) vs async path\n(minimum 30 samples, 3 seconds per test)\n\nRunning: bytes() consumption...\nRunning: text() consumption...\nRunning: Pipeline with identity transform...\nRunning: Iteration consumption...\nRunning: Many tiny chunks...\n\n==================================================================================================================================\nBENCHMARK RESULTS - Sync vs Async API Comparison\n==================================================================================================================================\nScenario                     | Sync Path        | Async (sync src) | Async (async src) | Sync vs Async    | Sync vs 
AsyncGen\n-----------------------------+------------------+------------------+------------------+------------------+-----------------\nbytes() (8KB x 2000)         | 5.00 GB/s        | 4.76 GB/s        | 7.00 GB/s        | 1.0x faster      | 1.4x slower\ntext() (1KB chunks)          | 1.71 GB/s        | 2.03 GB/s        | 1.11 GB/s        | 1.2x slower      | 1.5x faster\nTransform (4KB x 1000)       | 5.46 GB/s        | 6.02 GB/s        | 3.10 GB/s        | 1.1x slower      | 1.8x faster\nIteration (8KB x 1000)       | 281.11 GB/s      | 371.53 GB/s      | 24.27 GB/s       | 1.3x slower      | 11.6x faster\nTiny chunks (64B x 20000)    | 1.79 GB/s        | 2.11 GB/s        | 189.77 MB/s      | 1.2x slower      | 9.4x faster\n==================================================================================================================================\n\nAverage speedup of sync path vs async (sync source): 0.9x\nAverage speedup of sync path vs async (async generator): 5.0x\n\nRecommendation: Use sync APIs (fromSync, pullSync, bytesSync) when source data is synchronously available.\n\n──────────────────────────────────────────────────────────────────────\nRunning: 10-advanced-features.ts\n──────────────────────────────────────────────────────────────────────\n\nBenchmark: Advanced Features Performance\nTests: broadcast vs share, merge, pipeTo, backpressure policies, stateful transforms\n(minimum 15 samples, 2-3 seconds per test)\n\n\n--- Section 1: broadcast() vs share() ---\nRunning: broadcast() with 2 consumers...\nRunning: broadcast/share with transforms...\n\n--- Section 2: merge() ---\nRunning: merge() 2 streams...\nRunning: merge() 4 streams...\n\n--- Section 3: pipeTo() ---\nRunning: pipeTo() async...\nRunning: pipeToSync()...\n\n--- Section 4: Backpressure Policies ---\nRunning: backpressure policy comparison...\n\n--- Section 5: Stateful Transforms ---\nRunning: stateful vs stateless 
transforms...\n\n==============================================================================================================\nBENCHMARK RESULTS (higher throughput = better)\n==============================================================================================================\nScenario                         | New Stream             | Web Stream             | Difference         | Significance\n---------------------------------+------------------------+------------------------+--------------------+---------------\nbroadcast vs share (2 consumers  | 1.10 GB/s              | 2.36 GB/s              | 2.15x slower       | within noise\nbroadcast vs share (w/ transfor  | 649.16 MB/s            | 879.53 MB/s            | 1.35x slower       | within noise\nmerge() 2 streams (4KB x 250 ea  | 5.78 GB/s              | 5.35 GB/s              | 1.08x faster       | within noise\npipeTo vs pipeTo (4KB x 500)     | 50.71 GB/s             | 4.44 GB/s              | 11.43x faster      | significant\npipeToSync vs pipeTo             | 92.68 GB/s             | 42.19 GB/s             | 2.20x faster       | within noise\nStateless vs Stateful transform  | 1.00 GB/s              | 3.40 GB/s              | 3.40x slower       | significant\n==============================================================================================================\n\nSummary: 1 faster, 1 slower, 4 within noise\nSamples per benchmark: 100\n\n================================================================================\nMerge 4 Streams\n================================================================================\nTest                                     | Time               | Throughput\n-----------------------------------------+--------------------+-------------------\nNew Stream merge(4)                      | 324.79µs           | 6.31 
GB/s\n================================================================================\n\n================================================================================\nBackpressure Policies (broadcast)\n================================================================================\nTest                                     | Time               | Throughput\n-----------------------------------------+--------------------+-------------------\nstrict                                   | 296.41µs           | 3.45 GB/s\ndrop-oldest                              | 267.54µs           | 3.83 GB/s\ndrop-newest                              | 255.00µs           | 4.02 GB/s\n================================================================================\n```\n","funding_links":[],"categories":["⚙️ Backend \u0026 APIs"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjasnell%2Fnew-streams","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fjasnell%2Fnew-streams","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjasnell%2Fnew-streams/lists"}