{"id":13473152,"url":"https://github.com/brainix/pottery","last_synced_at":"2025-05-14T00:04:53.227Z","repository":{"id":33536200,"uuid":"37182318","full_name":"brainix/pottery","owner":"brainix","description":"Redis for humans. 🌎🌍🌏","archived":false,"fork":false,"pushed_at":"2025-05-04T09:54:17.000Z","size":1110,"stargazers_count":1148,"open_issues_count":19,"forks_count":62,"subscribers_count":18,"default_branch":"master","last_synced_at":"2025-05-04T10:34:57.419Z","etag":null,"topics":["asyncio","bloom-filter","cache","dict","distributed","distributed-lock","distributed-locks","forhumans","library","lock","no-sql","python","redis","redis-cache","redis-client","resillience"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/brainix.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2015-06-10T07:30:43.000Z","updated_at":"2025-05-04T09:54:19.000Z","dependencies_parsed_at":"2023-01-15T01:20:16.108Z","dependency_job_id":"952de60a-80bf-44c8-bdc6-969a505d142f","html_url":"https://github.com/brainix/pottery","commit_stats":{"total_commits":684,"total_committers":4,"mean_commits":171.0,"dds":0.004385964912280715,"last_synced_commit":"c7be6f1f25c5404a460b676cc60d4e6a931f8ee7"},"previous_names":[],"tags_count":52,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/brainix%2Fpottery","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/brainix%2Fpottery/tags","releases_url":"http
s://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/brainix%2Fpottery/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/brainix%2Fpottery/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/brainix","download_url":"https://codeload.github.com/brainix/pottery/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254043314,"owners_count":22004925,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["asyncio","bloom-filter","cache","dict","distributed","distributed-lock","distributed-locks","forhumans","library","lock","no-sql","python","redis","redis-cache","redis-client","resillience"],"created_at":"2024-07-31T16:01:01.229Z","updated_at":"2025-05-14T00:04:53.207Z","avatar_url":"https://github.com/brainix.png","language":"Python","readme":"# Pottery: Redis for Humans 🌎🌍🌏\n\n[Redis](http://redis.io/) is awesome, but [Redis\ncommands](http://redis.io/commands) are not always intuitive.  Pottery is a\nPythonic way to access Redis.  If you know how to use Python dicts, then you\nalready know how to use Pottery.  
Pottery is useful for accessing Redis more\neasily, and also for implementing microservice resilience patterns; and it has\nbeen battle tested in production at scale.\n\n[![Build status](https://img.shields.io/github/actions/workflow/status/brainix/pottery/python-package.yml?branch=master)](https://github.com/brainix/pottery/actions?query=branch%3Amaster)\n[![Security status](https://img.shields.io/badge/security-bandit-dark.svg)](https://github.com/PyCQA/bandit)\n[![Latest released version](https://badge.fury.io/py/pottery.svg)](https://badge.fury.io/py/pottery)\n\n![Supported Python versions](https://img.shields.io/pypi/pyversions/pottery)\n\n[![Total number of downloads](https://pepy.tech/badge/pottery)](https://pepy.tech/project/pottery)\n[![Downloads per month](https://pepy.tech/badge/pottery/month)](https://pepy.tech/project/pottery)\n[![Downloads per week](https://pepy.tech/badge/pottery/week)](https://pepy.tech/project/pottery)\n\n\n\n## Table of Contents\n- [Dicts 📖](#dicts)\n- [Sets 🛍️](#sets)\n- [Lists ⛓](#lists)\n- [Counters 🧮](#counters)\n- [Deques 🖇️](#deques)\n- [Queues 🚶‍♂️🚶‍♀️🚶‍♂️](#queues)\n- [Redlock 🔒](#redlock)\n    - [synchronize() 👯‍♀️](#synchronize)\n- [AIORedlock 🔒](#aioredlock)\n- [NextID 🔢](#nextid)\n- [redis_cache()](#redis_cache)\n- [CachedOrderedDict](#cachedordereddict)\n- [Bloom filters 🌸](#bloom-filters)\n- [HyperLogLogs 🪵](#hyperloglogs)\n- [ContextTimer ⏱️](#contexttimer)\n\n\n\n## Installation\n\n```shell\n$ pip3 install pottery\n```\n\n## Usage\n\nFirst, set up your Redis client:\n\n```python\n\u003e\u003e\u003e from redis import Redis\n\u003e\u003e\u003e redis = Redis.from_url('redis://localhost:6379/1')\n\u003e\u003e\u003e\n```\n\n\n\n## \u003ca name=\"dicts\"\u003e\u003c/a\u003eDicts 📖\n\n`RedisDict` is a Redis-backed container compatible with Python\u0026rsquo;s\n[`dict`](https://docs.python.org/3/tutorial/datastructures.html#dictionaries).\n\nHere is a small example using a `RedisDict`:\n\n```python\n\u003e\u003e\u003e from 
pottery import RedisDict\n\u003e\u003e\u003e tel = RedisDict({'jack': 4098, 'sape': 4139}, redis=redis, key='tel')\n\u003e\u003e\u003e tel['guido'] = 4127\n\u003e\u003e\u003e tel\nRedisDict{'jack': 4098, 'sape': 4139, 'guido': 4127}\n\u003e\u003e\u003e tel['jack']\n4098\n\u003e\u003e\u003e del tel['sape']\n\u003e\u003e\u003e tel['irv'] = 4127\n\u003e\u003e\u003e tel\nRedisDict{'jack': 4098, 'guido': 4127, 'irv': 4127}\n\u003e\u003e\u003e list(tel)\n['jack', 'guido', 'irv']\n\u003e\u003e\u003e sorted(tel)\n['guido', 'irv', 'jack']\n\u003e\u003e\u003e 'guido' in tel\nTrue\n\u003e\u003e\u003e 'jack' not in tel\nFalse\n\u003e\u003e\u003e\n```\n\nNotice the first two keyword arguments to `RedisDict()`:  The first is your\nRedis client.  The second is the Redis key name for your dict.  Other than\nthat, you can use your `RedisDict` the same way that you use any other Python\n`dict`.\n\n*Limitations:*\n\n1. Keys and values must be JSON serializable.\n\n\n\n## \u003ca name=\"sets\"\u003e\u003c/a\u003eSets 🛍️\n\n`RedisSet` is a Redis-backed container compatible with Python\u0026rsquo;s\n[`set`](https://docs.python.org/3/tutorial/datastructures.html#sets).\n\nHere is a brief demonstration:\n\n```python\n\u003e\u003e\u003e from pottery import RedisSet\n\u003e\u003e\u003e basket = RedisSet({'apple', 'orange', 'apple', 'pear', 'orange', 'banana'}, redis=redis, key='basket')\n\u003e\u003e\u003e sorted(basket)\n['apple', 'banana', 'orange', 'pear']\n\u003e\u003e\u003e 'orange' in basket\nTrue\n\u003e\u003e\u003e 'crabgrass' in basket\nFalse\n\n\u003e\u003e\u003e a = RedisSet('abracadabra', redis=redis, key='magic')\n\u003e\u003e\u003e b = set('alacazam')\n\u003e\u003e\u003e sorted(a)\n['a', 'b', 'c', 'd', 'r']\n\u003e\u003e\u003e sorted(a - b)\n['b', 'd', 'r']\n\u003e\u003e\u003e sorted(a | b)\n['a', 'b', 'c', 'd', 'l', 'm', 'r', 'z']\n\u003e\u003e\u003e sorted(a \u0026 b)\n['a', 'c']\n\u003e\u003e\u003e sorted(a ^ b)\n['b', 'd', 'l', 'm', 'r', 
'z']\n\u003e\u003e\u003e\n```\n\nNotice the two keyword arguments to `RedisSet()`:  The first is your Redis\nclient.  The second is the Redis key name for your set.  Other than that, you\ncan use your `RedisSet` the same way that you use any other Python `set`.\n\nDo more efficient membership testing for multiple elements using\n`.contains_many()`:\n\n```python\n\u003e\u003e\u003e nirvana = RedisSet({'kurt', 'krist', 'dave'}, redis=redis, key='nirvana')\n\u003e\u003e\u003e tuple(nirvana.contains_many('kurt', 'krist', 'chat', 'dave'))\n(True, True, False, True)\n\u003e\u003e\u003e\n```\n\n*Limitations:*\n\n1. Elements must be JSON serializable.\n\n\n\n## \u003ca name=\"lists\"\u003e\u003c/a\u003eLists ⛓\n\n`RedisList` is a Redis-backed container compatible with Python\u0026rsquo;s\n[`list`](https://docs.python.org/3/tutorial/introduction.html#lists).\n\n```python\n\u003e\u003e\u003e from pottery import RedisList\n\u003e\u003e\u003e squares = RedisList([1, 4, 9, 16, 25], redis=redis, key='squares')\n\u003e\u003e\u003e squares\nRedisList[1, 4, 9, 16, 25]\n\u003e\u003e\u003e squares[0]\n1\n\u003e\u003e\u003e squares[-1]\n25\n\u003e\u003e\u003e squares[-3:]\n[9, 16, 25]\n\u003e\u003e\u003e squares[:]\n[1, 4, 9, 16, 25]\n\u003e\u003e\u003e squares + [36, 49, 64, 81, 100]\nRedisList[1, 4, 9, 16, 25, 36, 49, 64, 81, 100]\n\u003e\u003e\u003e\n```\n\nNotice the two keyword arguments to `RedisList()`:  The first is your Redis\nclient.  The second is the Redis key name for your list.  Other than that, you\ncan use your `RedisList` the same way that you use any other Python `list`.\n\n*Limitations:*\n\n1. Elements must be JSON serializable.\n2. Under the hood, Python implements `list` using an array.  Redis implements\n   list using a\n   [doubly linked list](https://redis.io/topics/data-types-intro#redis-lists).\n   As such, inserting elements at the head or tail of a `RedisList` is fast,\n   O(1).  However, accessing `RedisList` elements by index is slow, O(n).  
So\n   in terms of performance and ideal use cases, `RedisList` is more similar to\n   Python\u0026rsquo;s `deque` than Python\u0026rsquo;s `list`.  Instead of `RedisList`,\n   consider using [`RedisDeque`](#deques).\n\n\n\n## \u003ca name=\"counters\"\u003e\u003c/a\u003eCounters 🧮\n\n`RedisCounter` is a Redis-backed container compatible with Python\u0026rsquo;s\n[`collections.Counter`](https://docs.python.org/3/library/collections.html#collections.Counter).\n\n```python\n\u003e\u003e\u003e from pottery import RedisCounter\n\u003e\u003e\u003e c = RedisCounter(redis=redis, key='my-counter')\n\u003e\u003e\u003e c = RedisCounter('gallahad', redis=redis, key='my-counter')\n\u003e\u003e\u003e c.clear()\n\u003e\u003e\u003e c = RedisCounter({'red': 4, 'blue': 2}, redis=redis, key='my-counter')\n\u003e\u003e\u003e c.clear()\n\u003e\u003e\u003e c = RedisCounter(redis=redis, key='my-counter', cats=4, dogs=8)\n\u003e\u003e\u003e c.clear()\n\n\u003e\u003e\u003e c = RedisCounter(['eggs', 'ham'], redis=redis, key='my-counter')\n\u003e\u003e\u003e c['bacon']\n0\n\u003e\u003e\u003e c['sausage'] = 0\n\u003e\u003e\u003e del c['sausage']\n\u003e\u003e\u003e c.clear()\n\n\u003e\u003e\u003e c = RedisCounter(redis=redis, key='my-counter', a=4, b=2, c=0, d=-2)\n\u003e\u003e\u003e sorted(c.elements())\n['a', 'a', 'a', 'a', 'b', 'b']\n\u003e\u003e\u003e c.clear()\n\n\u003e\u003e\u003e RedisCounter('abracadabra', redis=redis, key='my-counter').most_common(3)\n[('a', 5), ('b', 2), ('r', 2)]\n\u003e\u003e\u003e c.clear()\n\n\u003e\u003e\u003e c = RedisCounter(redis=redis, key='my-counter', a=4, b=2, c=0, d=-2)\n\u003e\u003e\u003e from collections import Counter\n\u003e\u003e\u003e d = Counter(a=1, b=2, c=3, d=4)\n\u003e\u003e\u003e c.subtract(d)\n\u003e\u003e\u003e c\nRedisCounter{'a': 3, 'b': 0, 'c': -3, 'd': -6}\n\u003e\u003e\u003e\n```\n\nNotice the first two keyword arguments to `RedisCounter()`:  The first is your\nRedis client.  The second is the Redis key name for your counter.  
Other than\nthat, you can use your `RedisCounter` the same way that you use any other\nPython `Counter`.\n\n*Limitations:*\n\n1. Keys must be JSON serializable.\n\n\n\n## \u003ca name=\"deques\"\u003e\u003c/a\u003eDeques 🖇️\n\n`RedisDeque` is a Redis-backed container compatible with Python\u0026rsquo;s\n[`collections.deque`](https://docs.python.org/3/library/collections.html#collections.deque).\n\nExample:\n\n```python\n\u003e\u003e\u003e from pottery import RedisDeque\n\u003e\u003e\u003e d = RedisDeque('ghi', redis=redis, key='letters')\n\u003e\u003e\u003e for elem in d:\n...     print(elem.upper())\nG\nH\nI\n\n\u003e\u003e\u003e d.append('j')\n\u003e\u003e\u003e d.appendleft('f')\n\u003e\u003e\u003e d\nRedisDeque(['f', 'g', 'h', 'i', 'j'])\n\n\u003e\u003e\u003e d.pop()\n'j'\n\u003e\u003e\u003e d.popleft()\n'f'\n\u003e\u003e\u003e list(d)\n['g', 'h', 'i']\n\u003e\u003e\u003e d[0]\n'g'\n\u003e\u003e\u003e d[-1]\n'i'\n\n\u003e\u003e\u003e list(reversed(d))\n['i', 'h', 'g']\n\u003e\u003e\u003e 'h' in d\nTrue\n\u003e\u003e\u003e d.extend('jkl')\n\u003e\u003e\u003e d\nRedisDeque(['g', 'h', 'i', 'j', 'k', 'l'])\n\u003e\u003e\u003e d.rotate(1)\n\u003e\u003e\u003e d\nRedisDeque(['l', 'g', 'h', 'i', 'j', 'k'])\n\u003e\u003e\u003e d.rotate(-1)\n\u003e\u003e\u003e d\nRedisDeque(['g', 'h', 'i', 'j', 'k', 'l'])\n\n\u003e\u003e\u003e RedisDeque(reversed(d), redis=redis)\nRedisDeque(['l', 'k', 'j', 'i', 'h', 'g'])\n\u003e\u003e\u003e d.clear()\n\n\u003e\u003e\u003e d.extendleft('abc')\n\u003e\u003e\u003e d\nRedisDeque(['c', 'b', 'a'])\n\u003e\u003e\u003e\n```\n\nNotice the two keyword arguments to `RedisDeque()`:  The first is your Redis\nclient.  The second is the Redis key name for your deque.  Other than that, you\ncan use your `RedisDeque` the same way that you use any other Python `deque`.\n\n*Limitations:*\n\n1. 
Elements must be JSON serializable.\n\n\n\n## \u003ca name=\"queues\"\u003e\u003c/a\u003eQueues 🚶‍♂️🚶‍♀️🚶‍♂️\n\n`RedisSimpleQueue` is a Redis-backed multi-producer, multi-consumer FIFO queue\ncompatible with Python\u0026rsquo;s\n[`queue.SimpleQueue`](https://docs.python.org/3/library/queue.html#simplequeue-objects).\nIn general, use a Python `queue.Queue` if you\u0026rsquo;re using it in one or more\nthreads, use `multiprocessing.Queue` if you\u0026rsquo;re using it between processes,\nand use `RedisSimpleQueue` if you\u0026rsquo;re sharing it across machines or if you\nneed for your queue to persist across application crashes or restarts.\n\nInstantiate a `RedisSimpleQueue`:\n\n```python\n\u003e\u003e\u003e from pottery import RedisSimpleQueue\n\u003e\u003e\u003e cars = RedisSimpleQueue(redis=redis, key='cars')\n\u003e\u003e\u003e\n```\n\nNotice the two keyword arguments to `RedisSimpleQueue()`:  The first is your\nRedis client.  The second is the Redis key name for your queue.  Other than\nthat, you can use your `RedisSimpleQueue` the same way that you use any other\nPython `queue.SimpleQueue`.\n\nCheck the queue state, put some items in the queue, and get those items back\nout:\n\n```python\n\u003e\u003e\u003e cars.empty()\nTrue\n\u003e\u003e\u003e cars.qsize()\n0\n\u003e\u003e\u003e cars.put('Jeep')\n\u003e\u003e\u003e cars.put('Honda')\n\u003e\u003e\u003e cars.put('Audi')\n\u003e\u003e\u003e cars.empty()\nFalse\n\u003e\u003e\u003e cars.qsize()\n3\n\u003e\u003e\u003e cars.get()\n'Jeep'\n\u003e\u003e\u003e cars.get()\n'Honda'\n\u003e\u003e\u003e cars.get()\n'Audi'\n\u003e\u003e\u003e cars.empty()\nTrue\n\u003e\u003e\u003e cars.qsize()\n0\n\u003e\u003e\u003e\n```\n\n*Limitations:*\n\n1. Items must be JSON serializable.\n\n\n\n## \u003ca name=\"redlock\"\u003e\u003c/a\u003eRedlock 🔒\n\n`Redlock` is a safe and reliable lock to coordinate access to a resource shared\nacross threads, processes, and even machines, without a single point of\nfailure.  
[Rationale and algorithm\ndescription.](http://redis.io/topics/distlock)\n\n`Redlock` implements Python\u0026rsquo;s excellent\n[`threading.Lock`](https://docs.python.org/3/library/threading.html#lock-objects)\nAPI as closely as is feasible.  In other words, you can use `Redlock` the same\nway that you use `threading.Lock`.  The main reason to use `Redlock` over\n`threading.Lock` is that `Redlock` can coordinate access to a resource shared\nacross different machines; `threading.Lock` can\u0026rsquo;t.\n\nInstantiate a `Redlock`:\n\n```python\n\u003e\u003e\u003e from pottery import Redlock\n\u003e\u003e\u003e printer_lock = Redlock(key='printer', masters={redis}, auto_release_time=.2)\n\u003e\u003e\u003e\n```\n\nThe `key` argument represents the resource, and the `masters` argument\nspecifies your Redis masters across which to distribute the lock.  In\nproduction, you should have 5 Redis masters.  This is to eliminate a single\npoint of failure \u0026mdash; you can lose up to 2 out of the 5 Redis masters and\nyour `Redlock` will remain available and performant.  Now you can protect\naccess to your resource:\n\n```python\n\u003e\u003e\u003e if printer_lock.acquire():\n...     # Critical section - print stuff here.\n...     print('printer_lock is locked')\n...     printer_lock.release()\nprinter_lock is locked\n\u003e\u003e\u003e bool(printer_lock.locked())\nFalse\n\u003e\u003e\u003e\n```\n\nOr you can protect access to your resource inside a context manager:\n\n```python\n\u003e\u003e\u003e with printer_lock:\n...     # Critical section - print stuff here.\n...     print('printer_lock is locked')\nprinter_lock is locked\n\u003e\u003e\u003e bool(printer_lock.locked())\nFalse\n\u003e\u003e\u003e\n```\n\nIt\u0026rsquo;s safest to instantiate a new `Redlock` object every time you need to\nprotect your resource and to not share `Redlock` instances across different\nparts of code.  
In other words, think of the `key` as identifying the resource;\ndon\u0026rsquo;t think of any particular `Redlock` as identifying the resource.\nInstantiating a new `Redlock` every time you need a lock sidesteps bugs by\ndecoupling how you use `Redlock` from the forking/threading model of your\napplication/service.\n\n`Redlock`s are automatically released (by default, after 10 seconds).  You\nshould take care to ensure that your critical section completes well within\nthat timeout.  The reasons that `Redlock`s are automatically released are to\npreserve\n[\u0026ldquo;liveness\u0026rdquo;](http://redis.io/topics/distlock#liveness-arguments)\nand to avoid deadlocks (in the event that a process dies inside a critical\nsection before it releases its lock).\n\n```python\n\u003e\u003e\u003e import time\n\u003e\u003e\u003e if printer_lock.acquire():\n...     # Critical section - print stuff here.\n...     time.sleep(printer_lock.auto_release_time)\n\u003e\u003e\u003e bool(printer_lock.locked())\nFalse\n\u003e\u003e\u003e\n```\n\nIf 10 seconds isn\u0026rsquo;t enough time to execute your critical section,\nthen you can specify your own auto release time (in seconds):\n\n```python\n\u003e\u003e\u003e printer_lock = Redlock(key='printer', masters={redis}, auto_release_time=.2)\n\u003e\u003e\u003e if printer_lock.acquire():\n...     # Critical section - print stuff here.\n...     time.sleep(printer_lock.auto_release_time / 2)\n\u003e\u003e\u003e bool(printer_lock.locked())\nTrue\n\u003e\u003e\u003e time.sleep(printer_lock.auto_release_time / 2)\n\u003e\u003e\u003e bool(printer_lock.locked())\nFalse\n\u003e\u003e\u003e\n```\n\nBy default, `.acquire()` blocks indefinitely until the lock is acquired.\n
You\ncan make `.acquire()` return immediately with the `blocking` argument.\n`.acquire()` returns `True` if the lock was acquired; `False` if not.\n\n```python\n\u003e\u003e\u003e printer_lock_1 = Redlock(key='printer', masters={redis}, auto_release_time=.2)\n\u003e\u003e\u003e printer_lock_2 = Redlock(key='printer', masters={redis}, auto_release_time=.2)\n\u003e\u003e\u003e printer_lock_1.acquire(blocking=False)\nTrue\n\u003e\u003e\u003e printer_lock_2.acquire(blocking=False)  # Returns immediately.\nFalse\n\u003e\u003e\u003e printer_lock_1.release()\n\u003e\u003e\u003e\n```\n\nYou can make `.acquire()` block but not indefinitely by specifying the\n`timeout` argument (in seconds):\n\n```python\n\u003e\u003e\u003e printer_lock_1.acquire()\nTrue\n\u003e\u003e\u003e printer_lock_2.acquire(timeout=printer_lock_1.auto_release_time / 2)  # Waits 100 milliseconds.\nFalse\n\u003e\u003e\u003e import contextlib\n\u003e\u003e\u003e from pottery import ReleaseUnlockedLock\n\u003e\u003e\u003e with contextlib.suppress(ReleaseUnlockedLock):\n...     printer_lock_1.release()\n\u003e\u003e\u003e\n```\n\nYou can similarly configure the Redlock context manager\u0026rsquo;s\nblocking/timeout behavior during Redlock initialization.  If the context\nmanager fails to acquire the lock, it raises the `QuorumNotAchieved` exception.\n\n```python\n\u003e\u003e\u003e import contextlib\n\u003e\u003e\u003e from pottery import QuorumNotAchieved\n\u003e\u003e\u003e printer_lock_1 = Redlock(key='printer', masters={redis}, context_manager_blocking=True, context_manager_timeout=0.2)\n\u003e\u003e\u003e printer_lock_2 = Redlock(key='printer', masters={redis}, context_manager_blocking=True, context_manager_timeout=0.2)\n\u003e\u003e\u003e with printer_lock_1:\n...     with contextlib.suppress(QuorumNotAchieved):\n...         with printer_lock_2:  # Waits 200 milliseconds; raises QuorumNotAchieved.\n...             pass\n...     
print(f\"printer_lock_1 is {'locked' if printer_lock_1.locked() else 'unlocked'}\")\n...     print(f\"printer_lock_2 is {'locked' if printer_lock_2.locked() else 'unlocked'}\")\nprinter_lock_1 is locked\nprinter_lock_2 is unlocked\n\u003e\u003e\u003e\n```\n\n\n\n### \u003ca name=\"synchronize\"\u003e\u003c/a\u003esynchronize() 👯‍♀️\n\n`synchronize()` is a decorator that allows only one thread to execute a\nfunction at a time.  Under the hood, `synchronize()` uses a Redlock, so refer\nto the [Redlock documentation](#redlock) for more details.\n\nHere\u0026rsquo;s how to use `synchronize()`:\n\n```python\n\u003e\u003e\u003e from pottery import synchronize\n\u003e\u003e\u003e @synchronize(key='synchronized-func', masters={redis}, auto_release_time=1.5, blocking=True, timeout=-1)\n... def func():\n...   # Only one thread can execute this function at a time.\n...   return True\n...\n\u003e\u003e\u003e func()\nTrue\n\u003e\u003e\u003e\n```\n\n\n\n## \u003ca name=\"aioredlock\"\u003e\u003c/a\u003eAIORedlock 🔒\n\n`AIORedlock` is the asyncio implementation of Redlock, compatible with\nPython\u0026rsquo;s\n[`asyncio.Lock`](https://docs.python.org/3/library/asyncio-sync.html#lock).\n\nInstantiate an `AIORedlock` and protect a resource:\n\n```python\n\u003e\u003e\u003e import asyncio\n\u003e\u003e\u003e from redis.asyncio import Redis as AIORedis\n\u003e\u003e\u003e from pottery import AIORedlock\n\u003e\u003e\u003e async def main():\n...     aioredis = AIORedis.from_url('redis://localhost:6379/1')\n...     shower = AIORedlock(key='shower', masters={aioredis})\n...     if await shower.acquire():\n...         # Critical section - no other coroutine can enter while we hold the lock.\n...         print(f\"shower is {'occupied' if await shower.locked() else 'available'}\")\n...         await shower.release()\n...     
print(f\"shower is {'occupied' if await shower.locked() else 'available'}\")\n...\n\u003e\u003e\u003e asyncio.run(main(), debug=True)\nshower is occupied\nshower is available\n\u003e\u003e\u003e\n```\n\nOr you can protect access to your resource inside a context manager:\n\n```python\n\u003e\u003e\u003e asyncio.set_event_loop(asyncio.new_event_loop())\n\u003e\u003e\u003e async def main():\n...     aioredis = AIORedis.from_url('redis://localhost:6379/1')\n...     shower = AIORedlock(key='shower', masters={aioredis})\n...     async with shower:\n...         # Critical section - no other coroutine can enter while we hold the lock.\n...         print(f\"shower is {'occupied' if await shower.locked() else 'available'}\")\n...     print(f\"shower is {'occupied' if await shower.locked() else 'available'}\")\n...\n\u003e\u003e\u003e asyncio.run(main(), debug=True)\nshower is occupied\nshower is available\n\u003e\u003e\u003e\n```\n\n\n\n## \u003ca name=\"nextid\"\u003e\u003c/a\u003eNextID 🔢\n\n`NextID` safely and reliably produces increasing IDs across threads, processes,\nand even machines, without a single point of failure.  [Rationale and algorithm\ndescription.](http://antirez.com/news/102)\n\nInstantiate an ID generator:\n\n```python\n\u003e\u003e\u003e from pottery import NextID\n\u003e\u003e\u003e tweet_ids = NextID(key='tweet-ids', masters={redis})\n\u003e\u003e\u003e\n```\n\nThe `key` argument represents the sequence (so that you can have different\nsequences for user IDs, comment IDs, etc.), and the `masters` argument\nspecifies your Redis masters across which to distribute ID generation (in\nproduction, you should have 5 Redis masters).  Now, whenever you need a tweet\nID, call `next()` on the ID generator:\n\n```python\n\u003e\u003e\u003e next(tweet_ids)\n1\n\u003e\u003e\u003e next(tweet_ids)\n2\n\u003e\u003e\u003e next(tweet_ids)\n3\n\u003e\u003e\u003e\n```\n\nTwo caveats:\n\n1. If many clients are generating IDs concurrently, then there may be\n   \u0026ldquo;holes\u0026rdquo; in the sequence of IDs (e.g.: 1, 2, 6, 10, 11, 21,\n   \u0026hellip;).\n2. This algorithm scales to about 5,000 IDs per second (with 5 Redis masters).\n   If you need IDs faster than that, then you may want to consider other\n   techniques.\n\n\n\n## redis_cache()\n\n`redis_cache()` is a simple lightweight unbounded function return value cache,\nsometimes called\n[\u0026ldquo;memoize\u0026rdquo;](https://en.wikipedia.org/wiki/Memoization).\n`redis_cache()` implements Python\u0026rsquo;s excellent\n[`functools.cache()`](https://docs.python.org/3/library/functools.html#functools.cache)\nAPI as closely as is feasible.  In other words, you can use `redis_cache()` the\nsame way that you use `functools.cache()`.\n\n*Limitations:*\n\n1. Arguments to the function must be hashable.\n2. Return values from the function must be JSON serializable.\n3. Just like `functools.cache()`, `redis_cache()` does not allow for a maximum\n   size, does not evict old values, and grows unbounded.  Only use\n   `redis_cache()` in one of these cases:\n    1. Your function\u0026rsquo;s argument space has a known small cardinality.\n    2. You specify a `timeout` when calling `redis_cache()` to decorate your\n       function, to dump your _entire_ return value cache `timeout` seconds\n       after the last cache access (hit or miss).\n    3. You periodically call `.cache_clear()` to dump your _entire_ return\n       value cache.\n    4. You\u0026rsquo;re ok with your return value cache growing unbounded, and you\n       [understand the implications](https://docs.redislabs.com/latest/rs/administering/database-operations/eviction-policy/)\n       of this for your underlying Redis instance.\n\nIn general, you should only use `redis_cache()` when you want to reuse\npreviously computed values.\n
Accordingly, it doesn\u0026rsquo;t make sense to cache\nfunctions with side-effects or impure functions such as `time()` or `random()`.\n\nDecorate a function:\n\n```python\n\u003e\u003e\u003e import time\n\u003e\u003e\u003e from pottery import redis_cache\n\u003e\u003e\u003e @redis_cache(redis=redis, key='expensive-function-cache')\n... def expensive_function(n):\n...     time.sleep(.1)  # Simulate an expensive computation or database lookup.\n...     return n\n...\n\u003e\u003e\u003e\n```\n\nNotice the two keyword arguments to `redis_cache()`: The first is your Redis\nclient.  The second is the Redis key name for your function\u0026rsquo;s return\nvalue cache.\n\nCall your function and observe the cache hit/miss rates:\n\n```python\n\u003e\u003e\u003e expensive_function(5)\n5\n\u003e\u003e\u003e expensive_function.cache_info()\nCacheInfo(hits=0, misses=1, maxsize=None, currsize=1)\n\u003e\u003e\u003e expensive_function(5)\n5\n\u003e\u003e\u003e expensive_function.cache_info()\nCacheInfo(hits=1, misses=1, maxsize=None, currsize=1)\n\u003e\u003e\u003e expensive_function(6)\n6\n\u003e\u003e\u003e expensive_function.cache_info()\nCacheInfo(hits=1, misses=2, maxsize=None, currsize=2)\n\u003e\u003e\u003e\n```\n\nNotice that the first call to `expensive_function()` takes 0.1 seconds and\nresults in a cache miss; but the second call returns almost immediately and\nresults in a cache hit.  This is because after the first call, `redis_cache()`\ncached the return value for the call when `n == 5`.\n\nYou can access your original undecorated underlying `expensive_function()` as\n`expensive_function.__wrapped__`.  This is useful for introspection, for\nbypassing the cache, or for rewrapping the original function with a different\ncache.\n\nYou can force a cache reset for a particular combination of `args`/`kwargs`\nwith `expensive_function.__bypass__`.\n
A call to\n`expensive_function.__bypass__(*args, **kwargs)` bypasses the cache lookup,\ncalls the original underlying function, then caches the results for future\ncalls to `expensive_function(*args, **kwargs)`.  Note that a call to\n`expensive_function.__bypass__(*args, **kwargs)` results in neither a cache hit\nnor a cache miss.\n\nFinally, clear/invalidate your function\u0026rsquo;s entire return value cache with\n`expensive_function.cache_clear()`:\n\n```python\n\u003e\u003e\u003e expensive_function.cache_info()\nCacheInfo(hits=1, misses=2, maxsize=None, currsize=2)\n\u003e\u003e\u003e expensive_function.cache_clear()\n\u003e\u003e\u003e expensive_function.cache_info()\nCacheInfo(hits=0, misses=0, maxsize=None, currsize=0)\n\u003e\u003e\u003e\n```\n\n\n\n## CachedOrderedDict\n\nThe best way that I can explain `CachedOrderedDict` is through an example\nuse-case.  Imagine that your search engine returns document IDs, which then you\nhave to hydrate into full documents via the database to return to the client.\nThe data structure used to represent such search results must have the\nfollowing properties:\n\n1. It must preserve the order of the document IDs returned by the search engine.\n2. It must map document IDs to hydrated documents.\n3. It must cache previously hydrated documents.\n\nProperties 1 and 2 are satisfied by Python\u0026rsquo;s\n[`collections.OrderedDict`](https://docs.python.org/3/library/collections.html#collections.OrderedDict).\nHowever, `CachedOrderedDict` extends Python\u0026rsquo;s `OrderedDict` to also\nsatisfy property 3.\n\nThe most common usage pattern for `CachedOrderedDict` is as follows:\n\n1. Instantiate `CachedOrderedDict` with the IDs that you must look up or\n   compute passed in as the `dict_keys` argument to the initializer.\n2. Compute and store the cache misses for future lookups.\n3. 
Return some representation of your `CachedOrderedDict` to the client.\n\nInstantiate a `CachedOrderedDict`:\n\n```python\n\u003e\u003e\u003e from pottery import CachedOrderedDict\n\u003e\u003e\u003e search_results_1 = CachedOrderedDict(\n...     redis_client=redis,\n...     redis_key='search-results',\n...     dict_keys=(1, 2, 3, 4, 5),\n... )\n\u003e\u003e\u003e\n```\n\nThe `redis_client` argument to the initializer is your Redis client, and the\n`redis_key` argument is the Redis key for the Redis Hash backing your cache.\nThe `dict_keys` argument represents an ordered iterable of keys to be looked up\nand automatically populated in your `CachedOrderedDict` (on cache hits), or\nthat you\u0026rsquo;ll have to compute and populate for future lookups (on cache\nmisses).  Regardless of whether keys are cache hits or misses,\n`CachedOrderedDict` preserves the order of `dict_keys` (like a list), maps\nthose keys to values (like a dict), and maintains an underlying cache for\nfuture key lookups.\n\nIn the beginning, the cache is empty, so let\u0026rsquo;s populate it:\n\n```python\n\u003e\u003e\u003e sorted(search_results_1.misses())\n[1, 2, 3, 4, 5]\n\u003e\u003e\u003e search_results_1[1] = 'one'\n\u003e\u003e\u003e search_results_1[2] = 'two'\n\u003e\u003e\u003e search_results_1[3] = 'three'\n\u003e\u003e\u003e search_results_1[4] = 'four'\n\u003e\u003e\u003e search_results_1[5] = 'five'\n\u003e\u003e\u003e sorted(search_results_1.misses())\n[]\n\u003e\u003e\u003e\n```\n\nNote that `CachedOrderedDict` preserves the order of `dict_keys`:\n\n```python\n\u003e\u003e\u003e for key, value in search_results_1.items():\n...     print(f'{key}: {value}')\n1: one\n2: two\n3: three\n4: four\n5: five\n\u003e\u003e\u003e\n```\n\nNow, let\u0026rsquo;s look at a combination of cache hits and misses:\n\n```python\n\u003e\u003e\u003e search_results_2 = CachedOrderedDict(\n...     redis_client=redis,\n...     redis_key='search-results',\n...     dict_keys=(2, 4, 6, 8, 10),\n... 
>>> sorted(search_results_2.misses())
[6, 8, 10]
>>> search_results_2[2]
'two'
>>> search_results_2[6] = 'six'
>>> search_results_2[8] = 'eight'
>>> search_results_2[10] = 'ten'
>>> sorted(search_results_2.misses())
[]
>>> for key, value in search_results_2.items():
...     print(f'{key}: {value}')
2: two
4: four
6: six
8: eight
10: ten
>>>
```

*Limitations:*

1. Keys and values must be JSON serializable.



## <a name="bloom-filters"></a>Bloom filters 🌸

Bloom filters are a powerful data structure that helps you answer the
questions, _&ldquo;Have I seen this element before?&rdquo;_ and _&ldquo;How
many distinct elements have I seen?&rdquo;_; but not the question, _&ldquo;What
are all of the elements that I&rsquo;ve seen before?&rdquo;_  So think of Bloom
filters as Python sets that you can add elements to, use to test element
membership, and get the length of; but that you can&rsquo;t iterate through or
get elements back out of.

Bloom filters are probabilistic, which means that they can sometimes generate
false positives (as in, they may report that you&rsquo;ve seen a particular
element before even though you haven&rsquo;t).  But they will never generate
false negatives (so every time that they report that you haven&rsquo;t seen a
particular element before, you really must never have seen it).  You can tune
your acceptable false positive probability, though at the expense of the
storage size and the element insertion/lookup time of your Bloom filter.

Create a `BloomFilter`:

```python
>>> from pottery import BloomFilter
>>> dilberts = BloomFilter(
...     num_elements=100,
...     false_positives=0.01,
...     redis=redis,
...     key='dilberts',
... )
>>>
```

Here, `num_elements` represents the number of elements that you expect to
insert into your `BloomFilter`, and `false_positives` represents your
acceptable false positive probability.  Using these two parameters,
`BloomFilter` automatically computes its own storage size and number of times
to run its hash functions on element insertion/lookup such that it can
guarantee a false positive rate at or below what you can tolerate, given that
you&rsquo;re going to insert your specified number of elements.

Insert an element into the `BloomFilter`:

```python
>>> dilberts.add('rajiv')
>>>
```

Test for membership in the `BloomFilter`:

```python
>>> 'rajiv' in dilberts
True
>>> 'raj' in dilberts
False
>>> 'dan' in dilberts
False
>>>
```

See how many elements we&rsquo;ve inserted into the `BloomFilter`:

```python
>>> len(dilberts)
1
>>>
```

Note that `BloomFilter.__len__()` is an approximation, not an exact value,
though it&rsquo;s quite accurate.

Insert multiple elements into the `BloomFilter`:

```python
>>> dilberts.update({'raj', 'dan'})
>>>
```

Do more efficient membership testing for multiple elements using
`.contains_many()`:

```python
>>> tuple(dilberts.contains_many('rajiv', 'raj', 'dan', 'luis'))
(True, True, True, False)
>>>
```

Remove all of the elements from the `BloomFilter`:

```python
>>> dilberts.clear()
>>> len(dilberts)
0
>>>
```

*Limitations:*

1. Elements must be JSON serializable.
2. `len(bf)` is probabilistic in that it returns an accurate approximation, not
   an exact count.
   You can tune how accurate you want it to be with the `num_elements` and
   `false_positives` arguments to `.__init__()`, at the expense of storage space
   and insertion/lookup time.
3. Membership testing against a Bloom filter is probabilistic in that it *may*
   return false positives, but *never* returns false negatives.  This means that
   if `element in bf` evaluates to `True`, then you *may* have inserted the
   element into the Bloom filter.  But if `element in bf` evaluates to `False`,
   then you *must not* have inserted it.  Again, you can tune accuracy with the
   `num_elements` and `false_positives` arguments to `.__init__()`, at the
   expense of storage space and insertion/lookup time.



## <a name="hyperloglogs"></a>HyperLogLogs 🪵

HyperLogLogs are an interesting data structure designed to answer the question,
_&ldquo;How many distinct elements have I seen?&rdquo;_; but not the questions,
_&ldquo;Have I seen this element before?&rdquo;_ or _&ldquo;What are all of the
elements that I&rsquo;ve seen before?&rdquo;_  So think of HyperLogLogs as
Python sets that you can add elements to and get the length of; but that you
can&rsquo;t use to test element membership, iterate through, or get elements
out of.

HyperLogLogs are probabilistic, which means that they&rsquo;re accurate to
within a margin of error of up to 2%.
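That margin of error traces back to the estimator&rsquo;s standard error,
roughly `1.04 / sqrt(m)` for `m` registers&mdash;a property of the HyperLogLog
algorithm itself as analyzed by Flajolet et al., not anything pottery-specific
(pottery doesn&rsquo;t expose these internals).  A quick back-of-the-envelope
sketch:

```python
import math

def hll_relative_error(num_registers: int) -> float:
    """Approximate relative standard error of a HyperLogLog estimate."""
    return 1.04 / math.sqrt(num_registers)

# More registers cost more storage, but tighten the error bound.
for m in (256, 2048, 16384):
    print(f'{m:>5} registers: ~{hll_relative_error(m) * 100:.2f}% error')
```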
However, they can estimate the cardinality (size) of vast datasets (like the
number of unique Google searches issued in a day) reasonably accurately, with
a tiny amount of storage (1.5 KB).

Create a `HyperLogLog`:

```python
>>> from pottery import HyperLogLog
>>> google_searches = HyperLogLog(redis=redis, key='google-searches')
>>>
```

Insert an element into the `HyperLogLog`:

```python
>>> google_searches.add('sonic the hedgehog video game')
>>>
```

See how many elements we&rsquo;ve inserted into the `HyperLogLog`:

```python
>>> len(google_searches)
1
>>>
```

Insert multiple elements into the `HyperLogLog`:

```python
>>> google_searches.update({
...     'google in 1998',
...     'minesweeper',
...     'joey tribbiani',
...     'wizard of oz',
...     'rgb to hex',
...     'pac-man',
...     'breathing exercise',
...     'do a barrel roll',
...     'snake',
... })
>>> len(google_searches)
10
>>>
```

Through a clever hack, we can do membership testing against a `HyperLogLog`,
even though it was never designed for this purpose.  The way that the hack
works is that it creates a temporary copy of the `HyperLogLog`, then inserts
the element that you&rsquo;re running the membership test for into the
temporary copy.
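In spirit, the trick looks like this sketch, which uses a plain Python set as
a stand-in for a `HyperLogLog` (a toy illustration of the copy-and-compare
idea, not pottery&rsquo;s actual code):

```python
# Toy illustration of the copy-and-compare membership trick, using a plain
# Python set as a stand-in for a HyperLogLog (not pottery's actual code).
def probably_contains(sketch, element):
    temp = set(sketch)               # temporary copy of the original
    cardinality_before = len(temp)
    temp.add(element)                # insert into the copy only
    # Unchanged cardinality means the element was (probably) seen before.
    return len(temp) == cardinality_before

searches = {'joey tribbiani', 'wizard of oz'}
print(probably_contains(searches, 'joey tribbiani'))    # True
print(probably_contains(searches, 'jennifer aniston'))  # False
```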
If the insertion changes the temporary `HyperLogLog`&rsquo;s cardinality, then
the element must not have been inserted into the original `HyperLogLog`.  If
the cardinality stays the same, then the element was probably inserted before
(subject to the usual false positive caveat).

```python
>>> 'joey tribbiani' in google_searches
True
>>> 'jennifer aniston' in google_searches
False
>>>
```

Do more efficient membership testing for multiple elements using
`.contains_many()`:

```python
>>> tuple(google_searches.contains_many('joey tribbiani', 'jennifer aniston'))
(True, False)
>>>
```

Remove all of the elements from the `HyperLogLog`:

```python
>>> google_searches.clear()
>>> len(google_searches)
0
>>>
```

*Limitations:*

1. Elements must be JSON serializable.
2. `len(hll)` is probabilistic in that it returns an accurate approximation,
   not an exact count.
3. Membership testing against a HyperLogLog is probabilistic in that it *may*
   return false positives, but *never* returns false negatives.  This means that
   if `element in hll` evaluates to `True`, then you *may* have inserted the
   element into the HyperLogLog.  But if `element in hll` evaluates to `False`,
   then you *must not* have inserted it.



## <a name="contexttimer"></a>ContextTimer ⏱️

`ContextTimer` helps you easily and accurately measure elapsed time.
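Conceptually, such a timer just records a start time and subtracts from it
later; here&rsquo;s a minimal sketch built on `time.monotonic()` (a simplified
stand-in for illustration, not pottery&rsquo;s actual implementation):

```python
import time

# Minimal sketch of a wall-clock, context-manager timer (a simplified
# stand-in for pottery's ContextTimer, not its actual implementation).
class SimpleTimer:
    def __enter__(self):
        self._start = time.monotonic()  # monotonic clock: never jumps backwards
        return self

    def __exit__(self, *exc_info):
        return False  # don't suppress exceptions raised inside the block

    def elapsed(self) -> float:
        """Milliseconds since the timer started."""
        return (time.monotonic() - self._start) * 1000

with SimpleTimer() as timer:
    time.sleep(0.05)
    print(timer.elapsed() >= 50)  # True: at least 50 ms have passed
```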
Note that `ContextTimer` measures wall (real-world) time, not CPU time; and
that `elapsed()` returns time in milliseconds.

You can use `ContextTimer` stand-alone&hellip;

```python
>>> import time
>>> from pottery import ContextTimer
>>> timer = ContextTimer()
>>> timer.start()
>>> time.sleep(0.1)
>>> 100 <= timer.elapsed() < 200
True
>>> timer.stop()
>>> time.sleep(0.1)
>>> 100 <= timer.elapsed() < 200
True
>>>
```

&hellip;or as a context manager:

```python
>>> tests = []
>>> with ContextTimer() as timer:
...     time.sleep(0.1)
...     tests.append(100 <= timer.elapsed() < 200)
>>> time.sleep(0.1)
>>> tests.append(100 <= timer.elapsed() < 200)
>>> tests
[True, True]
>>>
```



## Contributing

### Obtain source code

1. Clone the git repo:
    1. `$ git clone git@github.com:brainix/pottery.git`
    2. `$ cd pottery/`
2. Install project-level dependencies:
    1. `$ make install`

### Run tests

1. In one Terminal session:
    1. `$ cd pottery/`
    2. `$ redis-server`
2. In a second Terminal session:
    1. `$ cd pottery/`
    2. `$ make test`
    3. `$ make test-readme`

`make test` runs all of the unit tests as well as the coverage test.  However,
sometimes, when debugging, it can be useful to run an individual test module,
class, or method:

1. In one Terminal session:
    1. `$ cd pottery/`
    2. `$ redis-server`
2. In a second Terminal session:
    1. Run a test module with: `$ make test tests=tests.test_dict`
    2. Run a test class with: `$ make test tests=tests.test_dict.DictTests`
    3. 
Run a test method with: `$ make test tests=tests.test_dict.DictTests.test_keyexistserror`

`make test-readme` doctests the Python code examples in this README to ensure
that they&rsquo;re correct.
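Under the hood, doctesting a prose file like this one boils down to
Python&rsquo;s standard `doctest` module; here&rsquo;s a minimal,
self-contained sketch of the idea (not the project&rsquo;s actual Makefile
target):

```python
import doctest
import os
import tempfile

# Write a tiny Markdown-ish file containing a doctest-style example...
sample = """
A worked example:

>>> 2 + 2
4
"""
with tempfile.NamedTemporaryFile('w', suffix='.md', delete=False) as f:
    f.write(sample)
    path = f.name

# ...then let doctest find and run the `>>>` examples inside it.
results = doctest.testfile(path, module_relative=False)
os.unlink(path)
print(results)  # TestResults(failed=0, attempted=1)
```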