{"id":30180940,"url":"https://github.com/streamich/node-multicore","last_synced_at":"2025-08-12T08:06:27.446Z","repository":{"id":151164721,"uuid":"623694444","full_name":"streamich/node-multicore","owner":"streamich","description":"Delightful multicore programming in Node.js","archived":false,"fork":false,"pushed_at":"2023-12-15T05:42:52.000Z","size":208,"stargazers_count":3,"open_issues_count":1,"forks_count":1,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-08-09T12:15:14.125Z","etag":null,"topics":["concurrency","multicore","multithreading","parallel","parallel-computing","parallel-programming","threadpool","threads","workers"],"latest_commit_sha":null,"homepage":"","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"unlicense","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/streamich.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2023-04-04T22:20:39.000Z","updated_at":"2024-06-27T15:40:29.000Z","dependencies_parsed_at":null,"dependency_job_id":"70fbb43e-dd1e-43e9-b608-4b837a0359cd","html_url":"https://github.com/streamich/node-multicore","commit_stats":{"total_commits":126,"total_committers":1,"mean_commits":126.0,"dds":0.0,"last_synced_commit":"11ba2e176fb59177cad065f558816b79904c5486"},"previous_names":[],"tags_count":7,"template":false,"template_full_name":null,"purl":"pkg:github/streamich/node-multicore","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/streamich%2Fnode-multicore","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/streamich%2Fnode-multicore/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/reposit
ories/streamich%2Fnode-multicore/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/streamich%2Fnode-multicore/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/streamich","download_url":"https://codeload.github.com/streamich/node-multicore/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/streamich%2Fnode-multicore/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":269760527,"owners_count":24471518,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-08-10T02:00:08.965Z","response_time":71,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["concurrency","multicore","multithreading","parallel","parallel-computing","parallel-programming","threadpool","threads","workers"],"created_at":"2025-08-12T08:06:11.697Z","updated_at":"2025-08-12T08:06:27.438Z","avatar_url":"https://github.com/streamich.png","language":"TypeScript","funding_links":[],"categories":[],"sub_categories":[],"readme":"# Node Multicore\n\nParallel programming for Node.js made easy. 
Make any CommonJs or ESM module\nrun in a thread pool.\n\n- __Global thread pool:__ designed to be a shared thread pool for all NPM packages.\n- __Custom thread pools:__ create a custom thread pool, if you need to.\n- __Instant start:__ starts with 0 threads and scales as the load increases.\n- __Instant module loading:__ load modules to the thread pool dynamically and\n  instantly\u0026mdash;a module is loaded in more threads as the module concurrency\n  increases.\n- __Channels:__ each function invocation creates a bi-directional data channel,\n  which allows you to stream data to a worker thread and back to the main thread.\n- __Pin work to a thread:__ ability to pin a module to a single thread. Say,\n  your thread holds state\u0026mdash;you can pin execution to a single thread, making\n  subsequent method calls hit the same thread.\n- __Single function module:__ quickly create a single-function module by just\n  defining the function in your code.\n- __Dynamic:__ the pool size grows as the concurrency rises, and dead threads are replaced by new ones.\n- __Fast:__ Node Multicore is as fast as comparable thread pools; see the benchmarks below.\n\n### Table of contents\n\n- [Getting started](#getting-started)\n- [The thread pool](#the-thread-pool)\n  - [The global thread pool](#the-global-thread-pool)\n  - [Creating a custom thread pool](#creating-a-custom-thread-pool)\n- [Modules](#modules)\n  - [Static modules](#static-modules)\n  - [Single function modules](#single-function-modules)\n  - [Dynamic CommonJs modules](#dynamic-commonjs-modules)\n  - [*Module Expressions*](#module-expressions)\n- [Module exports](#module-exports)\n  - [Functions](#functions)\n  - [Channels](#channels)\n  - [Promises](#promises)\n  - [Other exports](#other-exports)\n- [Advanced concepts](#advanced-concepts)\n  - [Pinning a thread](#pinning-a-thread)\n  - [Transferring data by ownership](#transferring-data-by-ownership)\n- [Multicore packages](#multicore-packages)\n- [Demo / Benchmark](#demo--benchmark)\n\n\n## Getting 
started\n\nInstall the package\n\n```\nnpm install node-multicore\n```\n\nCreate a `module.ts` that should be executed in the thread pool\n\n```ts\nimport {WorkerFn} from 'node-multicore';\n\nexport const add: WorkerFn\u003c[number, number], number\u003e = ([a, b]) =\u003e {\n  return a + b;\n};\n```\n\nLoad your module from the main thread\n\n```ts\nimport {resolve} from 'path';\nimport {pool} from 'node-multicore';\n\nconst specifier = resolve(__dirname, 'module');\ntype Methods = typeof import('./module');\n\nconst math = pool.module(specifier).typed\u003cMethods\u003e();\n```\n\nNow call your methods from the main thread\n\n```ts\nconst result = await math.exec('add', [1, 2]); // 3\n```\n\n\n## The thread pool\n\n### The global thread pool\n\nThe `node-multicore` thread pool is designed to be a single shared global thread\npool for all compute-intensive NPM packages. You can import it as follows:\n\n```ts\nimport {pool} from 'node-multicore';\n```\n\nThe global thread pool starts with 0 threads and scales up to the number of CPUs\nless 1, as the load increases. This is a deliberate design decision: it keeps the global\nthread pool from overloading the CPU with threads. You can customize the\nminimum and maximum number of threads in the thread pool using the `MC_MIN_THREAD_POOL_SIZE`\nand `MC_MAX_THREAD_POOL_SIZE` environment variables.\n\n\n### Creating a custom thread pool\n\nThe thread pool is designed to be a shared resource, so it is not\nrecommended to create your own pool. 
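The default sizing policy can be expressed in a few lines; the following is an illustrative reconstruction of the documented bounds (not the library's actual source), using the environment variables described above:

```typescript
import * as os from 'os';

// Illustrative sketch of the documented defaults: start with 0 threads,
// scale up to (number of CPUs - 1); both bounds can be overridden via
// the MC_MIN_THREAD_POOL_SIZE / MC_MAX_THREAD_POOL_SIZE env variables.
const min = Number(process.env.MC_MIN_THREAD_POOL_SIZE ?? 0);
const max = Number(
  process.env.MC_MAX_THREAD_POOL_SIZE ?? Math.max(1, os.cpus().length - 1),
);
```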
However, if you need to create a separate\none, you can:\n\n```ts\nimport {WorkerPool} from 'node-multicore';\n\nconst dedicatedPool = new WorkerPool({});\n\n// Instantiate the minimum number of threads\nawait dedicatedPool.init();\n```\n\nWhen creating a thread pool, you can pass the following options:\n\n- `min` \u0026mdash; minimum number of threads in the pool, defaults to `0` or\n  `process.env.MC_MIN_THREAD_POOL_SIZE` environment setting.\n- `max` \u0026mdash; maximum number of threads in the pool, defaults to the number of\n  CPUs less 1 or `process.env.MC_MAX_THREAD_POOL_SIZE` environment setting.\n- `trackUnmanagedFds` \u0026mdash; whether to track unmanaged file descriptors in\n  worker threads and close them when the thread is terminated. Defaults to `false`.\n- `name` \u0026mdash; name of the thread pool, used for debugging purposes. Defaults\n  to `multicore`.\n- `resourceLimits` \u0026mdash; resource limits for worker threads.\n- `env` \u0026mdash; environment variables for worker threads. Defaults to\n  `process.env`.\n\n\n## Modules\n\nA unit of parallelism in JavaScript is a module. You can load a module in the\nthread pool and call its exported functions.\n\nSimilar to the thread pool, each module is designed to be \"lazy\" as well. 
A\nmodule is not loaded in any of the threads initially, but as the module\nconcurrency rises, the module is gradually loaded in more worker threads.\n\n\n### Static modules\n\nThis is the preferred way to use this library: `pool.module(specifier)` loads a module\nby a global \"specifier\" into the thread pool, and you can then call its\nexported functions.\n\nFirst, create the module you want to load in the thread pool and put it\nin a `module.ts` file:\n\n```ts\nexport const add = ([a, b]) =\u003e a + b;\n```\n\nNow add your module to the thread pool:\n\n```ts\nimport {resolve} from 'path';\n\nconst specifier = resolve(__dirname, 'module');\nconst module = pool.module(specifier);\n```\n\nTo add TypeScript support, you can use the `typed()` method:\n\n```ts\nconst typed = module.typed\u003ctypeof import('./module')\u003e();\n```\n\nThis will create a type-safe wrapper, which knows the types of the exported\nfunctions. You can now call the exported functions from the module in one of the\nfollowing ways:\n\n#### Using the `.exec()` method\n\nThis will execute the function in one of the threads in the thread pool and\nreturn the result as a promise.\n\n```ts\nconst result = await typed.exec('add', [1, 2]); // 3\n```\n\n#### Using the `.ch()` method\n\nEvery function call creates a channel, which is a duplex stream (more on that\nlater). 
By calling the `.ch()` method, you can get a reference to the channel.\n\nYou can get the final result of the function call from the `.result` promise:\n\n```ts\nconst result = await typed.ch('add', [1, 2]).result; // 3\n```\n\n\n#### Using the `.api()` builder\n\nYou can construct an \"API\" object of your module using the `.api()` method.\n\n```ts\nconst api = typed.api();\n```\n\nThis returns an object of all the exported functions, which you can call:\n\n```ts\nconst result = await api.add(1, 2).result; // 3\n```\n\n\n#### Using the `.fn()` closure\n\nTo use this method you need to make sure that your module is loaded in at least\none thread. You can achieve that by calling the `module.init()` method.\n\n```ts\nawait module.init();\n```\n\nNow you can create a closure for your function\n\n```ts\nconst add = typed.fn('add');\n```\n\nand run it as a function (it returns a channel)\n\n```ts\nconst result = await add(1, 2).result; // 3\n```\n\n\n### Single function modules\n\nThe `fun()` method will create a module out of a single function and load it in\nthe global thread pool.\n\n```ts\nimport {fun} from 'node-multicore';\n\nconst fn = fun((a: number, b: number) =\u003e a + b);\n\nconst result = await fn(1, 2); // 3\n```\n\nNote: when using the `fun()` method you do not get access to the underlying channel,\nand you specify all function arguments directly in the call `fn(1, 2)` instead of\nas an array `fn([1, 2])`.\n\nUnder the hood, the `fun()` method creates a module with a single function. You\ncan achieve that manually as well:\n\n```ts\nconst module = pool.fun((a: number, b: number) =\u003e a + b);\n```\n\nNow the `module` object is just like any other module; the single function is\nexported as `default`.\n\nNote: your function cannot access any variables outside of its own scope.\n\n\n### Dynamic CommonJs modules\n\nYou can load a CommonJs module from a string. This is useful if you want to\nload a module dynamically. 
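For example, the module source can be assembled at runtime; a minimal self-contained sketch (the `makeModuleSource` helper is hypothetical, and the expression must come from trusted code, never from user input):

```typescript
// Hypothetical helper: build CommonJs module source at runtime from a
// trusted expression over the payload [a, b]; the resulting string can
// then be handed to the thread pool as a text module.
const makeModuleSource = (expr: string): string =>
  `exports.run = ([a, b]) => (${expr});`;

const text = makeModuleSource('a * b');
```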
It is loaded into threads progressively, as the\nmodule concurrency rises. After you are done with the module, you can unload it.\n\nCreate a CommonJs text module:\n\n```ts\nimport {pool} from 'node-multicore';\n\nconst text = /* js */ `\n\nlet state = 0;\n\nexports.add = ([a, b]) =\u003e {\n  return a + b;\n}\n\nexports.set = (value) =\u003e state = value;\nexports.get = () =\u003e state;\n\n`;\n```\n\nLoad it using the `pool.cjs()` method:\n\n```ts\nconst module = pool.cjs(text);\n```\n\nNow you can use it as any other module:\n\n```ts\n// Execute a function exported by the module\nconst result = await module.exec('add', [1, 2]);\nconsole.log(result); // 3\n\n// Pin the module to a single random thread, so multiple calls access the same state\nconst pinned = module.pinned();\nawait pinned.ch('set', 123).result;\nconst get = await pinned.ch('get', void 0).result;\nconsole.log(get); // 123\n```\n\nOnce you no longer need the module, you can unload it:\n\n```ts\n// Unload the module, once it's no longer needed\nawait module.unload();\n// await module.exec will throw an error now\n```\n\nRun a demo with the following command:\n\n```bash\nnode -r ts-node/register src/demo/cjs-text.ts\n```\n\n\n### *Module Expressions*\n\nThe [ECMAScript *Module Expressions*](https://github.com/tc39/proposal-module-expressions)\nproposal will make it possible to create anonymous modules at runtime, which can then be\ncopied to other threads. This library will support this proposal once it is\nimplemented in Node.js.\n\n\n## Module exports\n\nModules are loaded in worker threads and their exports become available in the\nmain thread. Below we describe how different types of exports are handled.\n\n\n### Functions\n\nThe most common export is a function, which receives a single \"payload\" argument.\nThe function can be async as well as synchronous. 
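For instance, an async export behaves the same way as a synchronous one; a self-contained sketch (typed inline here rather than with `WorkerFn`, so it runs on its own):

```typescript
// An async worker function: the awaited result is what the main
// thread eventually receives.
export const slowAdd = async ([a, b]: [number, number]): Promise<number> => {
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulate async work
  return a + b;
};
```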
The return value of the function\nis sent back to the main thread.\n\n```ts\nimport {WorkerFn} from 'node-multicore';\n\nexport const add: WorkerFn\u003c[a: number, b: number], number\u003e = ([a, b]) =\u003e {\n  return a + b;\n};\n```\n\n\n### Channels\n\nChannels are functions that accept 2 or 3 arguments. The first argument is the\n\"payload\" argument, which is the same as for regular functions. The next two\narguments are \"send\" and \"receive\" methods, which can be used to send and receive\ndata to and from the main thread.\n\n```ts\nimport {WorkerCh, taker} from 'node-multicore';\n\nexport const addThreeNumbers: WorkerCh\u003cnumber, number, number, void\u003e = async (one, send, recv) =\u003e {\n  const take = taker(recv);\n  const two = await take();\n  const three = await take();\n  return one + two + three;\n};\n```\n\nThe channel is open until the function returns. You can use the `taker()` helper\nto create a function, which will wait for the next value from the channel.\n\n\n### Promises\n\nIf a module exports a promise, then, when it is called from the main thread, the promise is\nresolved first, and then: (1) if the promise resolves to a function, the\nfunction is called with the payload argument; (2) if the promise resolves\nto anything else, the value is returned as is.\n\n\n### Other exports\n\nAll other exports are returned to the main thread as is, using the `postMessage`\ncopy algorithm.\n\n\n## Advanced concepts\n\n### Pinning a thread\n\nSometimes your threads need to share state. In that case you may want to pin\na series of module calls to the same thread. 
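A typical candidate is a worker module that keeps in-memory state between calls; a minimal sketch of such a module (each thread holds its own copy of `count`, so the counter is only meaningful if every call lands on the same thread):

```typescript
// Worker-side module with per-thread state.
let count = 0;

export const increment = (by: number): number => {
  count += by;
  return count;
};
```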
You can do that by calling the\n`pinned()` method on a module.\n\n```ts\nconst pinned = module.pinned();\n```\n\nThen use the `pinned` object to call the module functions:\n\n```ts\nconst result = await pinned.ch('add', [1, 2]).result;\n```\n\nAll calls through the `pinned` instance will be executed on the same thread.\n\n\n### Transferring data by ownership\n\nWhen you are sending data between threads, the most efficient way is to transfer\nownership of the data. You can do that using `ArrayBuffer` objects. This way\nthe data is not copied; instead the buffers are detached in the\ncurrent thread and become available in the receiving thread.\n\nTransfer buffers when executing a function:\n\n```ts\nmodule.exec('fn', params, [buffer1, buffer2, buffer3]);\n```\n\nTransfer buffers when writing to a channel from the main thread:\n\n```ts\nconst channel = module.ch('fn', params, [buffer1, buffer2, buffer3]);\n\nchannel.send(123, [buffer4, buffer5, buffer6]);\n```\n\nTransfer buffers when returning a value using the `msg` helper:\n\n```ts\nimport {msg} from 'node-multicore';\n\nexport const add = ([a, b]: [number, number]) =\u003e {\n  return msg(a + b, [buffer1, buffer2, buffer3]);\n};\n```\n\nTransfer buffers when writing to a channel from a worker thread:\n\n```ts\nexport const method = (params, send, recv) =\u003e {\n  send(123, [buffer1, buffer2, buffer3]);\n  send(456, [buffer4, buffer5, buffer6]);\n  return 123;\n};\n```\n\n\n## Multicore packages\n\nUse this shared thread pool to improve performance of compute-intensive NPM\npackages. 
Say there is a package `foo` that performs some heavy computations.\nCreate a new package `foo.multicore` and use this library to improve the performance\nof the `foo` package.\n\n`module.ts`:\n\n```ts\nimport {foo as fooNative} from 'foo';\n\nexport const foo = (params) =\u003e fooNative(...params);\n```\n\n`index.ts`:\n\n```ts\nimport {pool} from 'node-multicore';\n\nconst typed = pool.module(__dirname + '/module').typed\u003ctypeof import('./module')\u003e();\n\nexport const foo = async (...params) =\u003e {\n  return await typed.exec('foo', params);\n};\n```\n\n\n## Demo / Benchmark\n\nRun a demo with the following commands:\n\n```bash\nyarn\nyarn demo\n```\n\nSample output:\n\n```\nCPU = Apple M1, Cores = 8, Max threads = 7, Node = v18.15.0, Arch = arm64, OS = darwin\nWarmup ...\nThread pool: node-multicore (concurrency = 2): 5.280s\nThread pool: piscina (concurrency = 2): 5.214s\nThread pool: worker-nodes (concurrency = 2): 5.255s\nThread pool: node-multicore (concurrency = 4): 3.510s\nThread pool: piscina (concurrency = 4): 2.734s\nThread pool: worker-nodes (concurrency = 4): 2.747s\nThread pool: node-multicore (concurrency = 8): 2.598s\nThread pool: piscina (concurrency = 8): 2.178s\nThread pool: worker-nodes (concurrency = 8): 2.070s\nThread pool: node-multicore (concurrency = 16): 2.144s\nThread pool: piscina (concurrency = 16): 2.158s\nThread pool: worker-nodes (concurrency = 16): 2.045s\nThread pool: node-multicore (concurrency = 32): 1.919s\nThread pool: piscina (concurrency = 32): 2.153s\nThread pool: worker-nodes (concurrency = 32): 2.043s\nThread pool: node-multicore (concurrency = 64): 1.835s\nThread pool: piscina (concurrency = 64): 2.177s\nThread pool: worker-nodes (concurrency = 64): 2.044s\nThread pool: node-multicore (concurrency = 128): 1.843s\nThread pool: piscina (concurrency = 128): 2.145s\nThread pool: worker-nodes (concurrency = 128): 2.046s\nThread pool: node-multicore (concurrency = 256): 1.820s\nThread pool: piscina (concurrency = 256): 
2.116s\nThread pool: worker-nodes (concurrency = 256): 2.020s\nThread pool: node-multicore (concurrency = 512): 1.797s\nThread pool: piscina (concurrency = 512): 2.088s\nThread pool: worker-nodes (concurrency = 512): 1.995s\nThread pool: node-multicore (concurrency = 1024): 1.787s\nThread pool: piscina (concurrency = 1024): 2.058s\nThread pool: worker-nodes (concurrency = 1024): 2.003s\nThread pool: node-multicore (concurrency = 1): 9.968s\nThread pool: piscina (concurrency = 1): 9.995s\nThread pool: worker-nodes (concurrency = 1): 10.043s\nOn main thread (concurrency = 1): 9.616s\nOn main thread (concurrency = 10): 9.489s\n```\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fstreamich%2Fnode-multicore","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fstreamich%2Fnode-multicore","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fstreamich%2Fnode-multicore/lists"}