{"id":17749477,"url":"https://github.com/oresoftware/live-mutex","last_synced_at":"2025-04-13T08:40:34.586Z","repository":{"id":57134191,"uuid":"77249231","full_name":"ORESoftware/live-mutex","owner":"ORESoftware","description":" High-performance networked mutex for Node.js libraries.","archived":false,"fork":false,"pushed_at":"2020-07-27T05:57:46.000Z","size":800,"stargazers_count":139,"open_issues_count":16,"forks_count":5,"subscribers_count":8,"default_branch":"dev","last_synced_at":"2024-10-05T13:46:52.548Z","etag":null,"topics":["asynchronous","broker","lock","locks","mutex","networked","nodejs","redis","tcp-server","typescript"],"latest_commit_sha":null,"homepage":"","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ORESoftware.png","metadata":{"files":{"readme":"readme.md","changelog":"changelog.md","contributing":null,"funding":null,"license":"license.md","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2016-12-23T20:31:55.000Z","updated_at":"2024-10-01T21:04:35.000Z","dependencies_parsed_at":"2022-09-04T07:31:28.072Z","dependency_job_id":null,"html_url":"https://github.com/ORESoftware/live-mutex","commit_stats":null,"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ORESoftware%2Flive-mutex","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ORESoftware%2Flive-mutex/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ORESoftware%2Flive-mutex/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ORESoftware%2Flive-mutex/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ORESoftware","download_url":"https
://codeload.github.com/ORESoftware/live-mutex/tar.gz/refs/heads/dev","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248686058,"owners_count":21145413,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["asynchronous","broker","lock","locks","mutex","networked","nodejs","redis","tcp-server","typescript"],"created_at":"2024-10-26T11:23:29.059Z","updated_at":"2025-04-13T08:40:34.550Z","avatar_url":"https://github.com/ORESoftware.png","language":"TypeScript","readme":"\n\n\u003ca align=\"right\" href=\"https://travis-ci.org/ORESoftware/live-mutex\"\u003e\n    \u003cimg align=\"right\" alt=\"Travis Build Status\" src=\"https://travis-ci.org/ORESoftware/live-mutex.svg?branch=dev\"\u003e\n\u003c/a\u003e\n\n\u003cbr\u003e\n\n\u003ca align=\"right\" href=\"https://circleci.com/gh/ORESoftware/live-mutex\"\u003e\n    \u003cimg align=\"right\" alt=\"CircleCI Build Status\" src=\"https://circleci.com/gh/ORESoftware/live-mutex.png?branch=dev\u0026circle-token=8ee83a1b06811c9a167e71d12b52f8cf7f786581\"\u003e\n\u003c/a\u003e\n\n\u003cbr\u003e\n\n\u003ca align=\"right\" href=\"https://www.npmjs.com/package/live-mutex\"\u003e\n\u003cimg align=\"right\" alt=\"Latest NPM version\" src=\"https://img.shields.io/npm/v/live-mutex.svg?colorB=green\"\u003e\n\u003c/a\u003e\n\n\u003cbr\u003e\n\n------------------\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/oresoftware/media/master/namespaces/live-mutex/lmx-logo.png?x=33\"\u003e\n\u003c/p\u003e\n\n-------------------\n\n# Live-Mutex / LMX  :lock: + :unlock:\n\n### 
Disclaimer\n\n\u003e\n\u003e Tested on *nix and MacOS - (probably will work on Windows, but not tested on Windows). \u003cbr\u003e\n\u003e Tested and proven on Node.js versions \u003e= 8.0.0.\n\u003e\n\n# Simple Working Examples:\n\n\u003e See: https://github.com/ORESoftware/live-mutex-examples\n\n\n# Installation\n\n\n##### \u003ci\u003e For usage with Node.js libraries: \u003c/i\u003e\n\n\u003e\n\u003e```$ npm i live-mutex```\n\u003e\n\u003e\n\n##### \u003ci\u003e For command line tools: \u003c/i\u003e\n \n\u003e\n\u003e```$ npm i -g live-mutex```\n\u003e\n\u003e\n\n##### \u003ci\u003e Docker image for the broker: \u003c/i\u003e\n\n\u003e\n\u003e```\n\u003e   docker pull 'oresoftware/live-mutex-broker:0.2.24'\n\u003e   docker run --rm -d -p 6970:6970 --name lmx-broker 'oresoftware/live-mutex-broker:0.2.24'  \n\u003e   docker logs -f lmx-broker\n\u003e```\n\n\u003cbr\u003e\n\n\n## About\n\n* Written in TypeScript for maintainability and ease of use.\n* Live-Mutex is a *non-distributed* networked mutex/semaphore for synchronization across multiple processes/threads.\n* Non-distributed means no failover if the broker goes down, but the upside is higher performance.\n* By default it is a binary semaphore, but it can be used as a non-binary semaphore, where multiple lockholders can hold a lock at once, for example, to do some form of rate limiting.\n* Live-Mutex can use either TCP or Unix Domain Sockets (UDS) to create an evented (non-polling) networked mutex API.\n* Live-Mutex is significantly (orders of magnitude) more performant than Lockfile and Warlock for high-concurrency locking requests.\n* When Warlock and Lockfile are not finely/expertly tuned, that advantage grows from roughly 5x to more like 30x or 40x.\n* Live-Mutex should also be much less memory and CPU intensive than Lockfile and Warlock, because Live-Mutex is\nfully evented, while Lockfile and Warlock use a polling implementation by nature.\n\n\u003cbr\u003e\n\nThis library is ideal for use cases where a more robust 
\u003ci\u003edistributed\u003c/i\u003e locking mechanism is out-of-reach or otherwise inconvenient.\nYou can easily Dockerize the Live-Mutex broker using: https://github.com/ORESoftware/dockerize-lmx-broker\n\n\u003cbr\u003e\n\nOn a single machine, use Unix Domain Sockets for max performance. On a network, use TCP.\nTo use UDS, pass in \"udsPath\" to the client and broker constructors. Otherwise for TCP, pass a host/port combo to both.\n\n\u003cbr\u003e\n\n## Basic Metrics\nOn Linux/Ubuntu, if we feed live-mutex 10,000 lock requests, 20 concurrently, LMX can go through all 10,000 lock/unlock cycles\nin less than 2 seconds, which means at least 5 lock/unlock cycles per millisecond. That's with TCP. Using Unix Domain Sockets (for use on a single machine),\nLMX can reach at least 8.5 lock/unlock cycles per millisecond, about 30% more performant than TCP.\n\n\u003cbr\u003e\n\n## Rationale\nI used a couple of other libraries; they required manual retry logic and used polling under the hood to acquire locks.\nIt was difficult to fine-tune those libraries, and they were extremely slow under high lock request concurrency. \u003cbr\u003e\nOther libraries are stuck with polling for simple reasons - the filesystem is dumb, and so is Redis (unless you write some \u003cbr\u003e\nLua scripts that run on the server - I don't know of any libraries that do that).\n\n\u003cbr\u003e\n\nIf we create an intelligent broker that can enqueue locking requests, then we can create something that's both more performant and\nmore developer friendly. 
Enter live-mutex.\n\n\u003cb\u003e In more detail:\u003c/b\u003e\nSee: `docs/detailed-explanation.md` and `docs/about.md`\n\n\u003cbr\u003e\n\u003cbr\u003e\n\n\n# Simple Example\n\nLocking down a particular route in an Express server:\n\n```typescript\n\nimport * as express from 'express';\nimport {LMXClient} from 'live-mutex';\n\nconst client = new LMXClient();\nconst app = express();\n\n// note: in a real app, call client.ensure() (or pass a callback to the\n// constructor) before the first lock() call, per the best practices below\n\napp.use((req,res,next) =\u003e {\n   \n    if(req.url !== '/xyz'){\n      return next();\n    }\n   \n     // the lock will be automatically unlocked after 8 seconds\n    client.lock('foo', {ttl: 8000, retries: 2}, (err, unlock) =\u003e {\n    \n      if(err){\n        return next(err); \n      }\n\n      res.once('finish', () =\u003e {\n        unlock();\n      });\n      \n      next();\n\n    });\n\n});\n\n```\n\n\n# Basic Usage and Best Practices\n\nThe Live-Mutex API is completely asynchronous and requires async initialization for both\nthe client and broker instances. It should be apparent by now that this library requires a Node.js process to run a server, and that server stores the locking info as a single source of truth.\nThe broker can live within one of your existing Node.js processes, or more likely be launched separately. In other words, a live-mutex client could also be the broker;\nthere is nothing wrong with that. For any given key there should be only one broker. For absolute speed, you could use separate\nbrokers (in separate Node.js processes) for separate keys, but that's rarely necessary.\nUnix Domain Sockets are about 10-50% faster than TCP, depending on how well-tuned TCP is on your system.\n\n\u003cb\u003e Things to keep in mind: \u003c/b\u003e\n\n1. You need to initialize a broker before connecting any clients, otherwise your clients will pass back an error upon calling `connect()`.\n2. You need to call `ensure()/connect()` on a client, or use the asynchronous callback passed to the constructor, before calling `client.lock()` or `client.unlock()`.\n3. 
Live-Mutex clients and brokers are *not* event emitters. \u003cbr\u003e The two classes wrap Node.js sockets, but the socket connections are not exposed.\n4. To use TCP with a host/port, use `{port: \u003cnumber\u003e, host: \u003cstring\u003e}`; to use Unix Domain Sockets, use `{udsPath: \u003cabsoluteFilePath\u003e}`.\n5. If there is an error or Promise rejection, the lock was not acquired; otherwise, the lock was acquired.\n   This is nicer than other libraries that ask you to check the type of the second argument, instead of just checking\n   for the presence of an error.\n6. The same process that is a client can also be a broker. Live-Mutex is designed for this.\n   You probably only need one broker for any given host, and probably only need one broker even if you use multiple keys,\n   but you can always use more than one broker per host, on different ports. Obviously, it would not work\n   to use multiple brokers for the same key; that is the one thing you should not do.\n\n\n\u003cbr\u003e\n\n# Client Examples\n\n## Using shell / command line:\n\n\u003cdetails\u003e\n\u003csummary\u003eExample\u003c/summary\u003e\n\n(First, make sure you install the library as a global package with NPM.)\nThe real power of this library comes from usage with Node.js, but we can use this functionality at the command line too:\n\n```bash\n\n#  in shell 1, we launch a live-mutex server/broker\n$ lmx start            # 6970 is the default port\n\n\n#  in shell 2, we acquire/release locks on key \"foo\"\n$ lmx acquire foo      # 6970 is the default port\n$ lmx release foo      # 6970 is the default port\n\n```\n\nTo set a port / host / uds-path in the current shell, use:\n\n```bash\n$ lmx set host localhost\n$ lmx set port 6982\n$ lmx set uds_path \"$PWD/zoom\"\n```\n\nIf `uds_path` is set, it will override host/port. You must use `$ lmx set a b` to change settings. 
You can elect to use these environment variables\nin Node.js by using `{env: true}` in your Node.js code.\n\n\u003c/details\u003e\n\n\u003cbr\u003e\n\n# Using Node.js\n\n## Importing the library using Node.js\n\n```js\n// alternatively you can import all of these directly\nimport {Client, Broker} from 'live-mutex';\n\n// aliases of the above\nimport {LMXClient, LMXBroker} from 'live-mutex';\n```\n\n\u003cbr\u003e\n\n# Simple example\n\nTo see a *complete* and *simple* example of using a broker and client in the same process, see: `=\u003e docs/examples/simple.md`\n\n\u003cbr\u003e\n\n### A note on default behavior\n\nBy default, a lock request will retry 3 times, on an interval defined by `opts.lockRequestTimeout`, which defaults to 3 seconds.\nThat means a lock request may fail with a timeout error after about 9 seconds. To disable retries entirely,\nuse either `{retry: false}` or `{maxRetries: 0}`.\n\nThere is a built-in retry mechanism for locking requests. For unlock requests, on the other hand, there is no built-in retry functionality.\nIf you absolutely need an unlock request to succeed, use `opts.force = true`. Otherwise, implement your own retry mechanism for unlocking. 
If you want the library\nto implement automatic retries for unlocking, please file a ticket.\n\nAs explained in a later section, by default this library uses \u003ci\u003ebinary semaphores\u003c/i\u003e, which means only one lockholder per key at a time.\nIf you want more than one lockholder to be able to hold the lock for a certain key at a time, use `{max:x}` where x is an integer greater than 1.\n\n\u003cbr\u003e\n\n\n### Using the library with Promises (recommended usage)\n\n\u003cdetails\u003e\n\n \u003csummary\u003eExample\u003c/summary\u003e\n \n ```js\n const opts = {port: '\u003cport\u003e' , host: '\u003chost\u003e'};\n // this assumes the broker is already running; if not, launch one in this process\n \n const client = new Client(opts);\n \n // calling ensure before each critical section means that we ensure we have a connected client\n // for shorter lived applications, calling ensure more than once is not as important\n \n return client.ensure().then(c =\u003e  {   // (c is the same object as client)\n  return c.acquire('\u003ckey\u003e').then(({key,id}) =\u003e {\n     return c.release('\u003ckey\u003e', id);\n  });\n });\n \n ```\n\n\u003c/details\u003e\n\n\n\n### Using async/await\n\n\u003cdetails\u003e\n\u003csummary\u003eExample\u003c/summary\u003e\n\n```typescript\n    const times = 10000;\n    const start = Date.now();\n    \n    async.timesLimit(times, 25, async n =\u003e {\n      \n      const {id, key} = await c.acquire('foo');\n      // do your thing here\n      return await c.release(key, id);  // or just return w/o await, since await is redundant in the return statement\n      \n    }, err =\u003e {\n      \n      if (err) {\n        throw err;\n      }\n      \n      const diff = Date.now() - start;\n      console.log('Time required for live-mutex:', diff);\n      console.log('Lock/unlock cycles per millisecond:', Number(times / diff).toFixed(3));\n      process.exit(0);\n      \n    
});\n\n```\n\n\u003c/details\u003e\n\n\n\u003cbr\u003e\n\n#### Using vanilla callbacks (higher performance + easy-to-use convenience unlock function)\n\n```js\nclient.ensure(err =\u003e {\n   client.lock('\u003ckey\u003e', (err, unlock) =\u003e {\n       unlock(err =\u003e {  // unlock is a convenience function, bound to the correct key + request uuid\n\n       });\n   });\n});\n```\n\n\u003cbr\u003e\n\n#### If you want the key and request id, use:\n\n```js\nclient.ensure(err =\u003e {\n   client.lock('\u003ckey\u003e', (err, {id, key}) =\u003e {\n       client.unlock(key, id, err =\u003e {\n\n           // note that if we don't use the unlock convenience callback,\n           // we should definitely pass the id of the original request.\n           // this is for safety - we only want to unlock the corresponding lock,\n           // which is defined not just by the right key, but also the right request id.\n\n       });\n   });\n});\n```\n\n\u003cb\u003enote:\u003c/b\u003e using the id ensures that the unlock call corresponds to the original lock call; otherwise you could call\nunlock() for a key/id that your current call was not supposed to unlock.\n\n\u003cbr\u003e\n\n\n### Using the unlock convenience callback with promises:\n\nWe use a utility method on Client to promisify and run the unlock convenience callback.\n\n```js\n return client.ensure().then(c =\u003e  {   // (c is the same object as client)\n    return c.acquire('\u003ckey\u003e').then(unlock =\u003e {\n        return c.execUnlock(unlock);\n     });\n });\n\n```\n\nAs you can see, before any `client.lock()` call, we call `client.ensure()`...this is not mandatory, but it is a best practice. \u003cbr\u003e\n`client.ensure()` only needs to be called once before any subsequent `client.lock()` call. 
However, the benefit of calling it every time\nis that it will allow a new connection to be made if the existing one is in a bad state.\n\nAny *locking* errors will mostly be due to the failure to acquire a lock before timing out, and should\n very rarely happen if you understand your system and provide good settings/options to live-mutex.\n\n*Unlocking* errors should be very rare, and most likely will happen if the process running the broker goes down\nor is overwhelmed. You can simply log unlocking errors, and otherwise ignore them.\n\n\u003cbr\u003e\n\n## You must use the lock id, or {force:true} to reliably unlock\n\nYou must either pass the lock id, or use force, to unlock a lock:\n\n\u003cb\u003e works:\u003c/b\u003e\n\n```js\n return client.ensure().then(c =\u003e  {   // (c is the same object as client)\n    return c.acquire('\u003ckey\u003e').then(({key,id}) =\u003e {\n        return c.release(key, id);\n     });\n });\n```\n\n\u003cb\u003e works:\u003c/b\u003e\n\n```js\n return client.ensure().then(c =\u003e  {   // (c is the same object as client)\n    return c.acquire('\u003ckey\u003e').then(({key,id}) =\u003e {\n        return c.release(key, {force:true});\n     });\n });\n```\n\n\u003cb\u003e will not work:\u003c/b\u003e\n\n```js\n return client.ensure().then(c =\u003e  {   // (c is the same object as client)\n    return c.acquire('\u003ckey\u003e').then(({key,id}) =\u003e {\n        return c.release(key);\n     });\n });\n```\n\n\u003ci\u003e If it's not clear, the lock id is the id of the lock, which is unique for each and every critical section.\u003c/i\u003e\n\nAlthough using the lock id is preferred, `{force:true}` is acceptable, and necessary if you need to unlock from a different process,\nwhere you won't easily have access to the lock id.\n\n\u003cbr\u003e\n\n## Client constructor and client.lock() method options\n\n\u003cdetails\u003e\n\u003csummary\u003elock() method options\u003c/summary\u003e\nThere are some 
important options. \nMost options can be passed to the client constructor instead of the client lock method, which is more convenient and performant:\n\n```js\nconst c = new Client({port: 3999, ttl: 11000, lockRequestTimeout: 2000, maxRetries: 5});\n\nc.ensure().then(c =\u003e {\n    // lock will retry a maximum of 5 times, with 2 seconds between each retry\n   return c.acquire(key);\n})\n.then(({key, id, unlock}) =\u003e {\n\n   // we have acquired a lock on the key; if we don't release the lock within 11 seconds,\n   // it will be unlocked for us.\n\n   // note that if we want to use the unlock convenience function, it's available here\n\n   // runUnlock/execUnlock will return a promise, and execute the unlock convenience function for us\n   return c.execUnlock(unlock);\n});\n```\n\u003c/details\u003e\n\n\u003cbr\u003e\n\n## The current default values for constructor options:\n\n* `env` =\u003e `false`. If set to true, the Node.js library will default to settings from process.env (set via e.g. `$ lmx set port 5000`);\n* `port` =\u003e `6970`\n* `host` =\u003e `localhost`\n* `ttl` =\u003e `4000`ms. If 4000ms elapses and the lock still exists, the lock will be automatically released by the broker.\n* `maxRetries` =\u003e `3`. A lock request will be sent to the broker 3 times before an error is called back.\n* `lockRequestTimeout` =\u003e `3000`ms. Each lock request will time out after 3 seconds. Upon timeout, it will retry until maxRetries is reached.\n* `keepLocksOnExit` =\u003e `false`. If true, locks will *not* be deleted if a connection is closed.\n* `noDelay` =\u003e `true`. 
If true (the default), the TCP_NODELAY socket option is used (this option applies to both the broker constructor and the client constructor).\n\nAs already stated, unless you are using different options for different lock requests on the same client, \u003cbr\u003e\nsimply pass these options to the client constructor, which allows you to avoid passing an options object for each \u003cbr\u003e\nclient.lock/unlock call.\n\n\n\u003cbr\u003e\n\n## Usage with Promises and RxJS5 Observables:\n  \n  This library consciously uses a CPS interface, as this is the most primitive and performant async interface.\n  You can always wrap client.lock and client.unlock to use Promises or Observables, etc.\n  In the docs directory, I've demonstrated how to use live-mutex with ES6 Promises and RxJS5 Observables.\n  Releasing the lock can be implemented with (1) the unlock() convenience callback or with (2) both\n  the lockName and the uuid of the lock request.\n  \n  With regard to the Observables implementation, notice that we just pass errors to sub.next() instead of sub.error(),\n  but that's just a design decision.\n\n\n### Usage with Promises:\n\u003e see: `docs/examples/promises.md`\n\n\n### Usage with RxJS5 Observables\n\u003e see: `docs/examples/observables.md`\n\n\n\u003cbr\u003e\n\n\n## Non-binary mutex/semaphore\n\nBy default, only one lockholder can hold a lock at any moment, which corresponds to `{max:1}`.\nTo allow more than one lockholder for a particular key, use `{max:x}`, like so:\n\n```js\n\nc.lock('\u003ckey\u003e', {max:12}, (err,val) =\u003e {\n   // using the max option like so, as many as 12 lockholders can now hold the lock for key '\u003ckey\u003e'\n});\n\n```\n\nNon-binary semaphores are well-supported by live-mutex and are a primary feature.\n\n\u003cbr\u003e\n\n## Live-Mutex utils\n\n\u003cdetails\u003e\n\u003csummary\u003eUse lmx utils to your advantage\u003c/summary\u003e\nTo launch a broker process using Node.js:\n\n```js\n\nconst {lmUtils} = 
require('live-mutex');\n\nlmUtils.conditionallyLaunchSocketServer(opts, function(err){\n\n    if(err) throw err;\n\n      // either this process now owns the broker, or it's already running in a different process\n      // either way, we are good to go\n      // you don't need to use this utility method, you can easily write your own\n\n      // * the following is our recommended usage* =\u003e\n      // for convenience and safety, you can use the unlock callback, which is bound\n      // to the right key and internal call-id\n\n  });\n\n```\n\nTo see examples of launching a broker using Node.js code, see:\n\n```src/lm-start-server.ts```\n\n\nTo check if there is already a broker running in your system on the desired port, you can use a TCP ping utility\nto see if the broker is running somewhere. I have had a lot of luck with tcp-ping, like so:\n\n```js\n\nconst ping = require('tcp-ping');\n\nping.probe(host, port, function (err, available) {\n\n    if (err) {\n        // handle it\n    }\n    else if (available) {\n        // tcp server is already listening on the given host/port\n    }\n    else {\n       // nothing is listening, so you should launch a new server/broker, as stated above\n       // the broker can run in the same process as a client, or a separate process, either way\n    }\n});\n\n\n```\n\n\u003c/details\u003e\n\n\n\u003cbr\u003e\n\n### Live-Mutex supports Node.js-core domains\nTo see more, see: `docs/examples/domains.md`\n\n\u003cbr\u003e\n\n## Creating a simple client pool\n\n\u003cdetails\u003e\n\n\u003csummary\u003eExample\u003c/summary\u003e\nIn most cases, a single client is sufficient; this is true of many types of networked clients using async I/O.\nYou almost certainly do not need more than one client.\nHowever, if you run an empirical test and find that a client pool is beneficial or faster, it's easy to create one. 
Try this:\n\n```js\nconst {Client} = require('live-mutex');\n\nexports.createPool = function(opts){\n  \n  return Promise.all([\n     new Client(opts).connect(),\n     new Client(opts).connect(),\n     new Client(opts).connect(),\n     new Client(opts).connect()\n  ]);\n};\n```\n\n\u003c/details\u003e\n\n\n### User notes\n\n* If the major or minor version differs between client and broker, an error will be thrown in the client process.\n\n\n### Testing\n\n\u003e Look at test/readme.md\n\n\n\u003cbr\u003e\n\n\n## Using Docker + Unix Domain Sockets\n\nIn short, you almost certainly can't do this, because Unix domain sockets generally cannot be shared between host and container.\nIf your client is running on the host machine (or in another container) while your broker runs in a container,\nit will likely not work - but that's OK, since you can just use TCP/ports.\n \n\u003cdetails\u003e\n \u003csummary\u003eExample of an attempt\u003c/summary\u003e\n \n You almost certainly don't want to do this, as using UDS is for one machine only, and this technique only\n works on Linux; it does not work on MacOS (sharing sockets between host and container).\n \n When running on a single machine, here's how you use UDS with Docker:\n \n ```bash\n my_sock=\"$(pwd)/foo/uds.sock\";\n rm -f \"$my_sock\"\n docker run -d -v \"$(pwd)/foo\":/uds 'oresoftware/live-mutex-broker:latest' --use-uds\n \n ```\n \n The above passes the `--use-uds` boolean flag to the launch process, which tells the broker to use UDS instead of listening on a port.\n The -v option allows the host and container to share a portion of the filesystem. You should delete the socket file\n before starting the container, in case the file already exists.  
'/uds/uds.sock' is the path in the container that points to the socket file;\n it's a hardcoded, fixed path.\n \n When connecting to the broker with the Node.js client, you would use:\n \n ```typescript\n  const client = new Client({udsPath: 'foo/uds.sock'});\n ```\n\n\u003c/details\u003e\n\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Foresoftware%2Flive-mutex","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Foresoftware%2Flive-mutex","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Foresoftware%2Flive-mutex/lists"}