{"id":35693665,"url":"https://github.com/awolverp/cachebox","last_synced_at":"2026-01-26T15:34:54.909Z","repository":{"id":224898509,"uuid":"764198112","full_name":"awolverp/cachebox","owner":"awolverp","description":"The fastest memoizing and caching Python library written in Rust.","archived":false,"fork":false,"pushed_at":"2025-12-31T12:06:03.000Z","size":1958,"stargazers_count":384,"open_issues_count":1,"forks_count":8,"subscribers_count":5,"default_branch":"main","last_synced_at":"2026-01-04T09:14:34.541Z","etag":null,"topics":["cache","cachebox","caching","in-memory-caching","memoization","memoizing","pyo3","python","rust"],"latest_commit_sha":null,"homepage":"https://pypi.org/project/cachebox/","language":"Rust","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/awolverp.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2024-02-27T16:48:17.000Z","updated_at":"2026-01-04T08:10:57.000Z","dependencies_parsed_at":"2024-05-30T10:10:23.446Z","dependency_job_id":"dbd400c0-a83e-404c-bb56-e785a10a72f8","html_url":"https://github.com/awolverp/cachebox","commit_stats":null,"previous_names":["awolverp/cachebox"],"tags_count":48,"template":false,"template_full_name":null,"purl":"pkg:github/awolverp/cachebox","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/awolverp%2Fcachebox","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/awolverp%2Fcachebox/tags","releases_url":"https://repos.ecosy
ste.ms/api/v1/hosts/GitHub/repositories/awolverp%2Fcachebox/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/awolverp%2Fcachebox/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/awolverp","download_url":"https://codeload.github.com/awolverp/cachebox/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/awolverp%2Fcachebox/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28781525,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-26T13:55:28.044Z","status":"ssl_error","status_checked_at":"2026-01-26T13:55:26.068Z","response_time":59,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cache","cachebox","caching","in-memory-caching","memoization","memoizing","pyo3","python","rust"],"created_at":"2026-01-06T00:00:53.682Z","updated_at":"2026-01-26T15:34:54.902Z","avatar_url":"https://github.com/awolverp.png","language":"Rust","readme":"\u003cdiv align=\"center\"\u003e\n\n# Cachebox\n\n*The fastest caching Python library written in Rust*\n\n[**Releases**](https://github.com/awolverp/cachebox/releases) | \n[**Benchmarks**](https://github.com/awolverp/cachebox-benchmark) | 
\n[**Issues**](https://github.com/awolverp/cachebox/issues/new)\n\n[![License](https://img.shields.io/github/license/awolverp/cachebox.svg?style=flat-square)](https://github.com/awolverp/cachebox/blob/main/LICENSE)\n[![Release](https://img.shields.io/github/v/release/awolverp/cachebox.svg?style=flat-square)](https://github.com/awolverp/cachebox/releases)\n[![Python Versions](https://img.shields.io/pypi/pyversions/cachebox.svg?style=flat-square)](https://pypi.org/project/cachebox/)\n[![Downloads](https://img.shields.io/pypi/dm/cachebox?style=flat-square\u0026color=%23314bb5)](https://pepy.tech/projects/cachebox)\n\n\u003c/div\u003e\n\n-------\n\n### What does it do?\nCachebox lets you perform caching operations in Python easily and as fast as possible.\nIt can make your application significantly faster and is a good choice for large applications.\n**Ideal for optimizing large-scale applications** with efficient, low-overhead caching.\n\n**Key Features:**\n- 🚀 Extremely fast (10-50x faster than other caching libraries -- [*benchmarks*](https://github.com/awolverp/cachebox-benchmark))\n- 📊 Minimal memory footprint (50% of standard dictionary memory usage)\n- 🔥 Full-featured and user-friendly\n- 🧶 Completely thread-safe\n- 🔧 Tested and correct\n- **\\[R\\]** written in Rust for maximum performance\n- 🤝 Compatible with Python 3.9+ (PyPy and CPython)\n- 📦 Supports 7 advanced caching algorithms\n\n### Page Contents\n- ❓ [**When do I need caching and cachebox**](#when-do-i-need-caching-and-cachebox)\n- 🌟 [**Why `cachebox`**](#why-cachebox)\n- 🔧 [**Installation**](#installation)\n- 💡 [**Preview**](#examples)\n- 🎓 [**Getting started**](#getting-started)\n- ✏️ [**Incompatible changes**](#%EF%B8%8F-incompatible-changes)\n- 📌 [**Tips \u0026 Notes**](#tips-and-notes)\n\n### When do I need caching and cachebox\n- 📈 **Frequent Data Access** \\\n  If you need to access the same data multiple times, caching can help reduce the number of database queries or API calls, improving performance.\n\n- 💎 
**Expensive Operations** \\\n  If you have operations that are computationally expensive, caching can help reduce the number of times these operations need to be performed.\n\n- 🚗 **High Traffic Scenarios** \\\n  If your application has high user traffic, caching can help reduce the load on your server by reducing the number of requests that need to be processed.\n\n- #️⃣ **Web Page Rendering** \\\n  If you are rendering web pages, caching can help reduce the time it takes to generate the page by caching the results of expensive operations. Caching HTML pages can speed up the delivery of static content.\n\n- 🚧 **Rate Limiting** \\\n  If you have a rate limiting system in place, caching can help reduce the number of requests that need to be processed by the rate limiter. Also, caching can help you manage rate limits imposed by third-party APIs by reducing the number of requests sent.\n\n- 🤖 **Machine Learning Models** \\\n  If your application frequently makes predictions using the same input data, caching the results can save computation time.\n\n### Why cachebox?\n- **⚡ Rust** \\\nIt is written in *Rust* for high performance.\n\n- **🧮 SwissTable** \\\nIt uses Google's high-performance SwissTable hash map. 
Thanks to [hashbrown](https://github.com/rust-lang/hashbrown).\n\n- **✨ Low memory usage** \\\nIt has very low memory usage.\n\n- **⭐ Zero Dependency** \\\nAs we said, `cachebox` is written in Rust, so you don't have to install any other dependencies.\n\n- **🧶 Thread safe** \\\nIt's completely thread-safe and uses locks to prevent problems.\n\n- **👌 Easy To Use** \\\nYou only need to import it, choose your implementation, and use it like a dictionary.\n\n- **🚫 Avoids Cache Stampede** \\\nIt avoids [cache stampede](https://en.wikipedia.org/wiki/Cache_stampede) by using a distributed lock system.\n\n\n## Installation\ncachebox can be installed with `pip`:\n```bash\npip3 install -U cachebox\n```\n\n\u003e [!WARNING]\\\n\u003e The new version v5 has some incompatibilities with v4; for more info, please see [Incompatible changes](#incompatible-changes)\n\n## Examples\nThe simplest example of **cachebox** could look like this:\n```python\nimport cachebox\n\n# Like functools.lru_cache: if maxsize is set to 0, the cache can grow without bound.\n@cachebox.cached(cachebox.FIFOCache(maxsize=128))\ndef factorial(number: int) -\u003e int:\n    fact = 1\n    for num in range(2, number + 1):\n        fact *= num\n    return fact\n\nassert factorial(5) == 120\nassert len(factorial.cache) == 1\n\n# Async functions are also supported\n@cachebox.cached(cachebox.LRUCache(maxsize=128))\nasync def make_request(method: str, url: str) -\u003e dict:\n    response = await client.request(method, url)\n    return response.json()\n```\n\nAlso, unlike functools.lru_cache and other caching libraries, cachebox can copy `dict`, `list`, and `set` objects.\n```python\n@cachebox.cached(cachebox.LRUCache(maxsize=128))\ndef make_dict(name: str, age: int) -\u003e dict:\n    return {\"name\": name, \"age\": age}\n\nd = make_dict(\"cachebox\", 10)\nassert d == {\"name\": \"cachebox\", \"age\": 10}\nd[\"new-key\"] = \"new-value\"\n\nd2 = make_dict(\"cachebox\", 10)\n# `d2` will be `{\"name\": 
\"cachebox\", \"age\": 10, \"new-key\": \"new-value\"}` if you use other libraries\nassert d2 == {\"name\": \"cachebox\", \"age\": 10}\n```\n\nYou can use cache alghoritms without `cached` decorator -- just import what cache alghoritms you want and use it like a dictionary.\n```python\nfrom cachebox import FIFOCache\n\ncache = FIFOCache(maxsize=128)\ncache[\"key\"] = \"value\"\nassert cache[\"key\"] == \"value\"\n\n# You can also use `cache.get(key, default)`\nassert cache.get(\"key\") == \"value\"\n```\n\n## Getting started\nThere are 3 useful functions:\n- [**cached**](#cached--decorator): a decorator that helps you to cache your functions and calculations with a lot of options.\n- [**is_cached**](#is_cached--function): check if a function/method cached by cachebox or not\n\nAnd 9 classes:\n- [**BaseCacheImpl**](#basecacheimpl-️-class): base-class for all classes.\n- [**Cache**](#cache-️-class): A simple cache that has no algorithm; this is only a hashmap.\n- [**FIFOCache**](#fifocache-️-class): the FIFO cache will remove the element that has been in the cache the longest.\n- [**RRCache**](#rrcache-️-class): the RR cache will choice randomly element to remove it to make space when necessary.\n- [**LRUCache**](#lrucache-️-class): the LRU cache will remove the element in the cache that has not been accessed in the longest time.\n- [**LFUCache**](#lfucache-️-class): the LFU cache will remove the element in the cache that has been accessed the least, regardless of time.\n- [**TTLCache**](#ttlcache-️-class): the TTL cache will automatically remove the element in the cache that has expired.\n- [**VTTLCache**](#vttlcache-️-class): the TTL cache will automatically remove the element in the cache that has expired when need.\n- [**Frozen**](#frozen-️-class): you can use this class for freezing your caches.\n\nYou only need to import the class which you want, and behave with it like a dictionary (except for [VTTLCache](#vttlcache-️-class), this have some 
differences)\n\nThe following examples introduce these classes and their methods.\n**All the methods you will see in the examples are common across all classes (except for a few of them).**\n\n* * *\n\n### `cached` (🎀 decorator)\nDecorator to wrap a function with a memoizing callable that saves results in a cache.\n\n**Parameters:**\n- `cache`: Specifies a cache that handles and stores the results. If `None` or `dict`, `FIFOCache` will be used.\n\n- `key_maker`: Specifies a function that will be called with the same positional and keyword\n               arguments as the wrapped function itself, and which has to return a suitable\n               cache key (must be hashable).\n\n- `clear_reuse`: The wrapped function has a method named `cache_clear` that uses the `cache.clear`\n                 method to clear the cache. This parameter will be passed to the cache's `clear` method.\n\n- `callback`: Every time the `cache` is used, the callback is also called.\n              The callback arguments are: event number (see the `EVENT_MISS` and `EVENT_HIT` variables), key, and then result.\n\n- `copy_level`: The wrapped function always copies the result of your function and then returns it.\n                This parameter specifies which types of results the wrapped function has to copy.\n                `0` means \"never copy\", `1` means \"only copy `dict`, `list`, and `set` results\" and\n                `2` means \"always copy the results\".\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eExamples\u003c/b\u003e\u003c/summary\u003e\n\n\nA simple example:\n```python\nimport cachebox\n\n@cachebox.cached(cachebox.LRUCache(128))\ndef sum_as_string(a, b):\n    return str(a+b)\n\nassert sum_as_string(1, 2) == \"3\"\n\nassert len(sum_as_string.cache) == 1\nsum_as_string.cache_clear()\nassert len(sum_as_string.cache) == 0\n```\n\nA key_maker example:\n```python\nimport cachebox\n\ndef simple_key_maker(args: tuple, kwds: dict):\n    return args[0].path\n\n# Async methods 
are supported\n@cachebox.cached(cachebox.LRUCache(128), key_maker=simple_key_maker)\nasync def request_handler(request: Request):\n    return Response(\"hello man\")\n```\n\nA typed key_maker example:\n```python\nimport cachebox\n\n@cachebox.cached(cachebox.LRUCache(128), key_maker=cachebox.make_typed_key)\ndef sum_as_string(a, b):\n    return str(a+b)\n\nsum_as_string(1.0, 1)\nsum_as_string(1, 1)\nprint(len(sum_as_string.cache)) # 2\n```\n\nAs shown in the examples, you can manage a function's cache via its `.cache` attribute.\nThere are also more attributes and methods you can use:\n```python\nimport cachebox\n\n@cachebox.cached(cachebox.LRUCache(0))\ndef sum_as_string(a, b):\n    return str(a+b)\n\nprint(sum_as_string.cache)\n# LRUCache(0 / 9223372036854775807, capacity=0)\n\nprint(sum_as_string.cache_info())\n# CacheInfo(hits=0, misses=0, maxsize=9223372036854775807, length=0, memory=8)\n\n# `.cache_clear()` clears the cache\nsum_as_string.cache_clear()\n```\n\nMethod example: *(Added in v5.1.0)*\n```python\nimport cachebox\n\nclass Example:\n    def __init__(self, num) -\u003e None:\n        self.num = num\n        self._cache = cachebox.TTLCache(20, 10)\n\n    @cachebox.cached(lambda self: self._cache)\n    def method(self, char: str):\n        return char * self.num\n\nex = Example(10)\nassert ex.method(\"a\") == \"a\" * 10\n```\n\nCallback example: *(Added in v4.2.0)*\n```python\nimport cachebox\n\ndef callback_func(event: int, key, value):\n    if event == cachebox.EVENT_MISS:\n        print(\"callback_func: miss event\", key, value)\n    elif event == cachebox.EVENT_HIT:\n        print(\"callback_func: hit event\", key, value)\n    else:\n        # unreachable code\n        raise NotImplementedError\n\n@cachebox.cached(cachebox.LRUCache(0), callback=callback_func)\ndef func(a, b):\n    return a + b\n\nassert func(1, 2) == 3\n# callback_func: miss event (1, 2) 3\n\nassert func(1, 2) == 3 # hit\n# callback_func: hit event (1, 2) 3\n\nassert func(1, 2) == 3 # 
hit again\n# callback_func: hit event (1, 2) 3\n\nassert func(5, 4) == 9\n# callback_func: miss event (5, 4) 9\n```\n\n\u003c/details\u003e\n\n\u003e [!TIP]\\\n\u003e Since **`v4.1.0`** you can tell a cached function not to use the cache for a call:\n\u003e ```python\n\u003e # With the `cachebox__ignore=True` parameter, cachebox does not use the cache; it only calls the function and returns its result.\n\u003e sum_as_string(10, 20, cachebox__ignore=True)\n\u003e ```\n\n* * *\n\n### `cachedmethod` (🎀 decorator)\nThis works exactly like `cached()`, but ignores the `self` parameter in hashing and key making.\n\n\u003e [!WARNING]\\\n\u003e This function has been deprecated since `v5.1.0`; use the `cached` function instead.\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eExample\u003c/b\u003e\u003c/summary\u003e\n\n```python\nimport cachebox\n\nclass MyClass:\n    @cachebox.cachedmethod(cachebox.TTLCache(0, ttl=10))\n    def my_method(self, name: str):\n        return \"Hello, \" + name + \"!\"\n\nc = MyClass()\nc.my_method(\"world\")\n```\n\n\u003c/details\u003e\n\n* * *\n\n### `is_cached` (📦 function)\nChecks whether a function/method is cached by cachebox.\n\n**Parameters:**\n- `func`: The function/method to check.\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eExample\u003c/b\u003e\u003c/summary\u003e\n\n```python\nimport cachebox\n\n@cachebox.cached(cachebox.FIFOCache(0))\ndef func():\n    pass\n\nassert cachebox.is_cached(func)\n```\n\n\u003c/details\u003e\n\n* * *\n\n### `BaseCacheImpl` (🏗️ class)\nBase implementation for cache classes in the cachebox library.\n\nThis abstract base class defines the generic structure for cache implementations,\nsupporting different key and value types through generic type parameters.\nServes as a foundation for specific cache variants like Cache and FIFOCache.\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eExample\u003c/b\u003e\u003c/summary\u003e\n\n```python\nimport cachebox\n\n# 
subclass\nclass ClassName(cachebox.BaseCacheImpl):\n    ...\n\n# type-hint\ndef func(cache: cachebox.BaseCacheImpl):\n    ...\n\n# isinstance\ncache = cachebox.LFUCache(0)\nassert isinstance(cache, cachebox.BaseCacheImpl)\n```\n\n\u003c/details\u003e\n\n* * *\n\n### `Cache` (🏗️ class)\nA thread-safe, memory-efficient hashmap-like cache with configurable maximum size.\n\nProvides a flexible key-value storage mechanism with:\n- Configurable maximum size (zero means unlimited)\n- Lower memory usage compared to standard dict\n- Thread-safe operations\n- Useful memory management methods\n\nSupports initialization with optional initial data and capacity,\nand provides dictionary-like access with additional cache-specific operations.\n\n\u003e [!TIP]\\\n\u003e Differs from standard `dict` by:\n\u003e - it is thread-safe and unordered, while dict is not thread-safe and is ordered (Python 3.6+).\n\u003e - it uses much less memory than dict.\n\u003e - it supports useful methods for managing memory, while dict does not.\n\u003e - it does not support `popitem`, while dict does.\n\u003e - you can limit the size of a Cache, but you cannot limit a dict.\n\n|              | get   | insert  | delete | popitem |\n| ------------ | ----- | ------- | ------ | ------- |\n| Worst-case   | O(1)  | O(1)    | O(1)   | N/A     |\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eExample\u003c/b\u003e\u003c/summary\u003e\n\n```python\nfrom cachebox import Cache\n\n# These parameters are common across classes:\n# The `maxsize` param specifies the size limit of the cache (zero means unbounded); this cannot be changed later.\n# The `iterable` param lets you create the cache from a dict or an iterable.\n# If the `capacity` param is given, the cache attempts to allocate a new hash table with at\n# least enough capacity for inserting the given number of elements without reallocating.\ncache = Cache(maxsize=100, iterable=None, capacity=100)\n\n# You can use it like a dictionary\ncache[\"key\"] = \"value\"\n# or you 
can use `.insert(key, value)` instead of that (recommended)\ncache.insert(\"key\", \"value\")\n\nprint(cache[\"key\"]) # value\n\ndel cache[\"key\"]\ncache[\"key\"] # KeyError: key\n\n# cachebox.Cache does not have any eviction policy, so it will raise OverflowError when the bound is reached.\ncache.update({i:i for i in range(200)})\n# OverflowError: The cache has reached the bound.\n```\n\n\u003c/details\u003e\n\n* * *\n\n### `FIFOCache` (🏗️ class)\nA First-In-First-Out (FIFO) cache implementation with configurable maximum size and optional initial capacity.\n\nThis cache provides a fixed-size container that automatically removes the oldest items when the maximum size is reached.\n\n**Key features**:\n- Deterministic item eviction order (oldest items removed first)\n- Efficient key-value storage and retrieval\n- Supports dictionary-like operations\n- Allows optional initial data population\n\n|              | get   | insert  | delete       | popitem |\n| ------------ | ----- | ------- | ------------- | ------- |\n| Worst-case   | O(1)  | O(1) | O(min(i, n-i)) | O(1)  |\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eExample\u003c/b\u003e\u003c/summary\u003e\n\n```python\nfrom cachebox import FIFOCache\n\ncache = FIFOCache(5, {i:i*2 for i in range(5)})\n\nprint(len(cache)) # 5\ncache[\"new-key\"] = \"new-value\"\nprint(len(cache)) # 5\n\nprint(cache.get(3, \"default-val\")) # 6\nprint(cache.get(6, \"default-val\")) # default-val\n\nprint(cache.popitem()) # (1, 2)\n\n# The insert method returns a value:\n# - If the cache did not have this key present, None is returned.\n# - If the cache did have this key present, the value is updated, and the old value is returned.\nprint(cache.insert(3, \"val\")) # 6\nprint(cache.insert(\"new-key\", \"val\")) # None\n\n# Returns the first key in cache; this is the one which will be removed by `popitem()`.\nprint(cache.first())\n```\n\n\u003c/details\u003e\n\n* * *\n\n### `RRCache` (🏗️ class)\nA thread-safe cache implementation with Random 
Replacement (RR) policy.\n\nThis cache randomly selects and removes elements when the cache reaches its maximum size,\nensuring a simple and efficient caching mechanism with configurable capacity.\n\nSupports operations like insertion, retrieval, deletion, and iteration with O(1) complexity.\n\n|              | get   | insert  | delete | popitem |\n| ------------ | ----- | ------- | ------ | ------- |\n| Worst-case   | O(1)  | O(1)    | O(1)   | O(1)    |\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eExample\u003c/b\u003e\u003c/summary\u003e\n\n```python\nfrom cachebox import RRCache\n\ncache = RRCache(10, {i:i for i in range(10)})\nprint(cache.is_full()) # True\nprint(cache.is_empty()) # False\n\n# Returns the number of elements the map can hold without reallocating.\nprint(cache.capacity()) # 28\n\n# Shrinks the cache to fit len(self) elements.\ncache.shrink_to_fit()\nprint(cache.capacity()) # 10\n\n# Returns a random key\nprint(cache.random_key()) # 4\n```\n\n\u003c/details\u003e\n\n* * *\n\n### `LRUCache` (🏗️ class)\nThread-safe Least Recently Used (LRU) cache implementation.\n\nProvides a cache that automatically removes the least recently used items when\nthe cache reaches its maximum size. 
Supports various operations like insertion,\nretrieval, and management of cached items with configurable maximum size and\ninitial capacity.\n\n|              | get   | insert  | delete(i) | popitem |\n| ------------ | ----- | ------- | --------- | ------- |\n| Worst-case   | O(1)~ | O(1)~   | O(1)~ | O(1)~ |\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eExample\u003c/b\u003e\u003c/summary\u003e\n\n```python\nfrom cachebox import LRUCache\n\ncache = LRUCache(0, {i:i*2 for i in range(10)})\n\n# access `0`\nprint(cache[0]) # 0\nprint(cache.least_recently_used()) # 1\nprint(cache.popitem()) # (1, 2)\n\n# .peek() searches for a key-value in the cache and returns it without moving the key to recently used.\nprint(cache.peek(2)) # 4\nprint(cache.popitem()) # (2, 4)\n\n# Calls `popitem()` `n` times and returns the count of removed items.\nprint(cache.drain(5)) # 5\n```\n\n\u003c/details\u003e\n\n* * *\n\n### `LFUCache` (🏗️ class)\nA thread-safe Least Frequently Used (LFU) cache implementation.\n\nThis cache removes elements that have been accessed the least number of times,\nregardless of their access time. 
It provides methods for inserting, retrieving,\nand managing cache entries with configurable maximum size and initial capacity.\n\n|              | get   | insert  | delete(i) | popitem |\n| ------------ | ----- | ------- | --------- | ------- |\n| Worst-case   | O(1)~ | O(1)~   | O(min(i, n-i)) | O(1)~ |\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eExample\u003c/b\u003e\u003c/summary\u003e\n\n```python\nfrom cachebox import LFUCache\n\ncache = LFUCache(5)\ncache.insert('first', 'A')\ncache.insert('second', 'B')\n\n# access 'first' twice\ncache['first']\ncache['first']\n\n# access 'second' once\ncache['second']\n\nassert cache.least_frequently_used() == 'second'\nassert cache.least_frequently_used(2) is None # 2 is out of range\n\nfor item in cache.items_with_frequency():\n    print(item)\n# ('second', 'B', 1)\n# ('first', 'A', 2)\n```\n\n\u003c/details\u003e\n\n* * *\n\n### `TTLCache` (🏗️ class)\nA thread-safe Time-To-Live (TTL) cache implementation with configurable maximum size and expiration.\n\nThis cache automatically removes elements that have expired based on their time-to-live setting.\nSupports various operations like insertion, retrieval, and iteration.\n\n|              | get   | insert  | delete(i) | popitem |\n| ------------ | ----- | ------- | --------- | ------- |\n| Worst-case   | O(1)~ | O(1)~   | O(min(i, n-i)) | O(n) |\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eExample\u003c/b\u003e\u003c/summary\u003e\n\n```python\nfrom cachebox import TTLCache\nimport time\n\n# The `ttl` param specifies the time-to-live value for each element in the cache (in seconds); it cannot be zero or negative.\ncache = TTLCache(0, ttl=2)\ncache.update({i:str(i) for i in range(10)})\n\nprint(cache.get_with_expire(2)) # ('2', 1.99)\n\n# Returns the oldest key in the cache; this is the one which will be removed by `popitem()`.\nprint(cache.first()) # 0\n\ncache[\"mykey\"] = \"value\"\ntime.sleep(2)\ncache[\"mykey\"] # 
KeyError\n```\n\n\u003c/details\u003e\n\n* * *\n\n### `VTTLCache` (🏗️ class)\nA thread-safe, time-to-live (TTL) cache implementation with per-key expiration policy.\n\nThis cache allows storing key-value pairs with optional expiration times. When an item expires,\nit is automatically removed from the cache. The cache supports a maximum size and provides\nvarious methods for inserting, retrieving, and managing cached items.\n\nKey features:\n- Per-key time-to-live (TTL) support\n- Configurable maximum cache size\n- Thread-safe operations\n- Automatic expiration of items\n\nSupports dictionary-like operations such as get, insert, update, and iteration.\n\n|              | get   | insert  | delete(i) | popitem |\n| ------------ | ----- | ------- | --------- | ------- |\n| Worst-case   | O(1)~ | O(1)~   | O(min(i, n-i)) | O(1)~ |\n\n\u003e [!TIP]\\\n\u003e `VTTLCache` vs `TTLCache`:\n\u003e - In `VTTLCache` each item has its own unique time-to-live, unlike `TTLCache`.\n\u003e - `VTTLCache` is generally slower than `TTLCache`.\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eExample\u003c/b\u003e\u003c/summary\u003e\n\n```python\nfrom cachebox import VTTLCache\nimport time\n\n# The `ttl` param specifies the time-to-live value for `iterable` (in seconds); it cannot be zero or negative.\ncache = VTTLCache(100, iterable={i:i for i in range(4)}, ttl=3)\nprint(len(cache)) # 4\ntime.sleep(3)\nprint(len(cache)) # 0\n\n# \"key1\" exists for 5 seconds\ncache.insert(\"key1\", \"value\", ttl=5)\n# \"key2\" exists for 2 seconds\ncache.insert(\"key2\", \"value\", ttl=2)\n\ntime.sleep(2)\n# \"key1\" still has 3 seconds left\nprint(cache.get(\"key1\")) # value\n\n# \"key2\" has expired\nprint(cache.get(\"key2\")) # None\n```\n\n\u003c/details\u003e\n\n* * *\n\n### `Frozen` (🏗️ class)\n**This is not a cache**; this is a wrapper class that prevents modifications to an underlying cache implementation.\n\nThis class provides a read-only view of a cache, optionally 
allowing silent\nsuppression of modification attempts instead of raising exceptions.\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cb\u003eExample\u003c/b\u003e\u003c/summary\u003e\n\n```python\nfrom cachebox import Frozen, FIFOCache\n\ncache = FIFOCache(10, {1:1, 2:2, 3:3})\n\n# parameters:\n#   cls: your cache\n#   ignore: if False, a TypeError is raised on any attempt to modify the cache; otherwise the attempt is silently ignored.\nfrozen = Frozen(cache, ignore=True)\nprint(frozen[1]) # 1\nprint(len(frozen)) # 3\n\n# Frozen ignores this action and does nothing\nfrozen.insert(\"key\", \"value\")\nprint(len(frozen)) # 3\n\n# Let's try with ignore=False\nfrozen = Frozen(cache, ignore=False)\n\nfrozen.insert(\"key\", \"value\")\n# TypeError: This cache is frozen.\n```\n\n\u003c/details\u003e\n\n\u003e [!NOTE]\\\n\u003e The **Frozen** class can't prevent expiration in [TTLCache](#ttlcache) or [VTTLCache](#vttlcache).\n\u003e\n\u003e For example:\n\u003e ```python\n\u003e cache = TTLCache(0, ttl=3, iterable={i:i for i in range(10)})\n\u003e frozen = Frozen(cache)\n\u003e \n\u003e time.sleep(3)\n\u003e print(len(frozen)) # 0\n\u003e ```\n\n## ⚠️ Incompatible Changes\nThese are changes that are not compatible with the previous version:\n\n**You can see more info about the changes in the [Changelog](CHANGELOG.md).**\n\n#### CacheInfo's cachememory attribute renamed!\nThe `CacheInfo.cachememory` attribute was renamed to `CacheInfo.memory`.\n\n```python\n@cachebox.cached({})\ndef func(a: int, b: int) -\u003e str:\n    ...\n\ninfo = func.cache_info()\n\n# Older versions\nprint(info.cachememory)\n\n# New version\nprint(info.memory)\n```\n\n#### Errors in the `__eq__` method will not be ignored!\nErrors that occur during `__eq__` operations are no longer ignored.\n\n```python\nclass A:\n    def __hash__(self):\n        return 1\n\n    def __eq__(self, other):\n        raise NotImplementedError(\"not implemented\")\n\ncache = cachebox.FIFOCache(0, {A(): 10})\n\n# Older versions:\ncache[A()] # =\u003e 
KeyError\n\n# New version:\ncache[A()]\n# Traceback (most recent call last):\n# File \"script.py\", line 11, in \u003cmodule\u003e\n#    cache[A()]\n#    ~~~~~^^^^^\n#  File \"script.py\", line 7, in __eq__\n#   raise NotImplementedError(\"not implemented\")\n# NotImplementedError: not implemented\n```\n\n#### Cache comparisons will no longer be strict!\nIn older versions, cache comparisons depended on the caching algorithm. Now, they work just like dictionary comparisons.\n\n```python\ncache1 = cachebox.FIFOCache(10)\ncache2 = cachebox.FIFOCache(10)\n\ncache1.insert(1, 'first')\ncache1.insert(2, 'second')\n\ncache2.insert(2, 'second')\ncache2.insert(1, 'first')\n\n# Older versions:\ncache1 == cache2 # False\n\n# New version:\ncache1 == cache2 # True\n```\n\n## Tips and Notes\n#### How to save caches in files?\nThere's no built-in file-based implementation, but you can use `pickle` to save caches to files. For example:\n```python\nimport cachebox\nimport pickle\nc = cachebox.LRUCache(100, {i:i for i in range(78)})\n\nwith open(\"file\", \"wb\") as fd:\n    pickle.dump(c, fd)\n\nwith open(\"file\", \"rb\") as fd:\n    loaded = pickle.load(fd)\n\nassert c == loaded\nassert c.capacity() == loaded.capacity()\n```\n\n\u003e [!TIP]\\\n\u003e For more, see this [issue](https://github.com/awolverp/cachebox/issues/8).\n\n* * *\n\n#### How to copy the caches?\nYou can use `copy.deepcopy` or `cache.copy` for copying caches. 
For example:\n```python\nimport cachebox\ncache = cachebox.LRUCache(100, {i:i for i in range(78)})\n\n# shallow copy\nshallow = cache.copy()\n\n# deep copy\nimport copy\ndeep = copy.deepcopy(cache)\n```\n\n## License\nThis repository is licensed under the [MIT License](LICENSE).\n","funding_links":[],"categories":["Performance \u0026 Caching"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fawolverp%2Fcachebox","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fawolverp%2Fcachebox","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fawolverp%2Fcachebox/lists"}