{"id":28387316,"url":"https://github.com/digitalruby/simplecache","last_synced_at":"2025-10-11T05:33:11.099Z","repository":{"id":42517982,"uuid":"497123526","full_name":"DigitalRuby/SimpleCache","owner":"DigitalRuby","description":"Simple yet Powerful L1/L2/L3 caching in .NET. Memory -\u003e Local File -\u003e Redis. I am open to suggestions for enhancements, email support@digitalruby.com.","archived":false,"fork":false,"pushed_at":"2025-02-11T18:07:44.000Z","size":313,"stargazers_count":9,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-09-16T14:40:23.663Z","etag":null,"topics":["cache","caching","csharp","disk","dotnet","file","io","l1","l2","l3","layers","performance","ram","redis"],"latest_commit_sha":null,"homepage":"https://www.digitalruby.com","language":"C#","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/DigitalRuby.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null},"funding":{"github":"jjxtra","patreon":null,"open_collective":null,"ko_fi":null,"tidelift":null,"community_bridge":null,"liberapay":null,"issuehunt":null,"otechie":null,"lfx_crowdfunding":null,"custom":null}},"created_at":"2022-05-27T20:04:31.000Z","updated_at":"2025-02-11T18:07:47.000Z","dependencies_parsed_at":"2025-06-26T20:32:56.402Z","dependency_job_id":"5daa6c9d-cdc9-4bc2-b302-9fffef92812d","html_url":"https://github.com/DigitalRuby/SimpleCache","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/DigitalRuby/SimpleCache",
"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DigitalRuby%2FSimpleCache","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DigitalRuby%2FSimpleCache/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DigitalRuby%2FSimpleCache/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DigitalRuby%2FSimpleCache/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/DigitalRuby","download_url":"https://codeload.github.com/DigitalRuby/SimpleCache/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DigitalRuby%2FSimpleCache/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":279006319,"owners_count":26084085,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-11T02:00:06.511Z","response_time":55,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cache","caching","csharp","disk","dotnet","file","io","l1","l2","l3","layers","performance","ram","redis"],"created_at":"2025-05-30T17:11:04.608Z","updated_at":"2025-10-11T05:33:11.082Z","avatar_url":"https://github.com/DigitalRuby.png","language":"C#","readme":"\u003ch1 align=\"center\"\u003eSimpleCache\u003c/h1\u003e\n\nSimpleCache removes the headache and pain of getting caching right in .NET.\n\n**Features**:\n- Simple and intuitive 
API using generics and tasks.\n- Cache storm/stampede prevention (per machine) using `GetOrCreateAsync`. Your factory is guaranteed to execute only once per key, regardless of how many callers stack on it.\n- Exceptions are not cached.\n- Thread safe.\n- Three layers: RAM, disk and redis. Disk and redis can be disabled if desired.\n- Null and memory versions of both file and redis caches available for mocking.\n- Excellent test coverage.\n- Optimized usage of all your resources across the three layers for maximum performance.\n- Built-in json-lz4 serializer for file and redis caching for smaller values and minimal implementation pain.\n- You can create your own serializer if you want to use protobuf or other compression options.\n- Redis key remove/change/add notifications to keep all your servers in sync.\n\n## Setup and Configuration\n\n```cs\nusing DigitalRuby.SimpleCache;\n\n// create your builder, add simple cache\nvar builder = WebApplication.CreateBuilder(args);\n\n// bind to IConfiguration, see the DigitalRuby.SimpleCache.Sandbox project appsettings.json for an example\nbuilder.Services.AddSimpleCache(builder.Configuration);\n\n// you can also create a builder with a strongly typed configuration\nbuilder.Services.AddSimpleCache(new SimpleCacheConfiguration\n{\n    // fill in values here\n});\n```\n\nThe configuration options are:\n\n```json\n{\n  \"DigitalRuby.SimpleCache\":\n  {\n    /*\n    optional, cache key prefix, by default the entry assembly name is used\n    you can set this to an empty string to share keys between services that are using the same redis cluster\n    */\n    \"KeyPrefix\": \"sandbox\",\n\n    /* optional, override max memory size (in megabytes). Default is 1024. 
*/\n    \"MaxMemorySize\": 2048,\n\n    /* optional redis connection string */\n    \"RedisConnectionString\": \"localhost:6379\",\n\n    /*\n    optional, override file cache directory, set to empty to not use file cache (recommended if not on SSD)\n    the default is %temp% which means to use the temp directory\n    this example assumes running on Windows; for production, use an environment variable or just leave this off to default to %temp%\n    */\n    \"FileCacheDirectory\": \"c:/temp\",\n\n    /* optional, override the file cache cleanup threshold (0-100 percent). default is 15 */\n    \"FileCacheFreeSpaceThreshold\": 10,\n\n    /*\n    optional, override the default json-lz4 serializer with your own class that implements DigitalRuby.SimpleCache.ISerializer\n    the serializer is used to convert objects to bytes for the file and redis caches\n    this should be an assembly qualified type name\n    */\n    \"SerializerType\": \"DigitalRuby.SimpleCache.JsonSerializer, DigitalRuby.SimpleCache\"\n  }\n}\n\n```\n\nIf the `RedisConnectionString` is empty, no redis cache will be used and no key change notifications will be sent, which prevents automatic purging of cache values that are modified on other machines.\n\nFor production usage, you should load the redis connection string from an environment variable.\n\n## Usage\n\nYou can inject the following interface into your constructors to use the layered cache:\n\n```cs\n/// \u003csummary\u003e\n/// Layered cache interface. A layered cache aggregates multiple caches, such as memory, file and distributed cache (redis, etc.).\u003cbr/\u003e\n/// Internally, keys are prefixed with the entry assembly name and the type full name. 
You can change the entry assembly by specifying a KeyPrefix in the configuration.\u003cbr/\u003e\n/// \u003c/summary\u003e\npublic interface ILayeredCache : IDisposable\n{\n    /// \u003csummary\u003e\n    /// Get or create an item from the cache.\n    /// \u003c/summary\u003e\n    /// \u003ctypeparam name=\"T\"\u003eType of item\u003c/typeparam\u003e\n    /// \u003cparam name=\"key\"\u003eCache key\u003c/param\u003e\n    /// \u003cparam name=\"factory\"\u003eFactory method to create the item if no item is in the cache for the key. This factory is guaranteed to execute only once per key.\u003cbr/\u003e\n    /// Inside your factory, you should set the CacheParameters on the GetOrCreateAsyncContext to a duration and size tuple: (TimeSpan duration, int size)\u003c/param\u003e\n    /// \u003cparam name=\"cancelToken\"\u003eCancel token\u003c/param\u003e\n    /// \u003creturns\u003eTask returning the value of type T\u003c/returns\u003e\n    Task\u003cT\u003e GetOrCreateAsync\u003cT\u003e(string key, Func\u003cGetOrCreateAsyncContext, Task\u003cT\u003e\u003e factory, CancellationToken cancelToken = default);\n\n    /// \u003csummary\u003e\n    /// Attempts to retrieve value of T by key.\n    /// \u003c/summary\u003e\n    /// \u003ctypeparam name=\"T\"\u003eType of object to get\u003c/typeparam\u003e\n    /// \u003cparam name=\"key\"\u003eCache key\u003c/param\u003e\n    /// \u003cparam name=\"cancelToken\"\u003eCancel token\u003c/param\u003e\n    /// \u003creturns\u003eResult of type T or null if nothing found for the key\u003c/returns\u003e\n    Task\u003cT?\u003e GetAsync\u003cT\u003e(string key, CancellationToken cancelToken = default);\n\n    /// \u003csummary\u003e\n    /// Sets value T by key.\n    /// \u003c/summary\u003e\n    /// \u003ctypeparam name=\"T\"\u003eType of object\u003c/typeparam\u003e\n    /// \u003cparam name=\"key\"\u003eCache key to set\u003c/param\u003e\n    /// \u003cparam name=\"value\"\u003eValue to set\u003c/param\u003e\n    /// \u003cparam 
name=\"cacheParam\"\u003eCache parameters\u003c/param\u003e\n    /// \u003cparam name=\"cancelToken\"\u003eCancel token\u003c/param\u003e\n    /// \u003creturns\u003eTask\u003c/returns\u003e\n    Task SetAsync\u003cT\u003e(string key, T value, CacheParameters cacheParam, CancellationToken cancelToken = default);\n\n    /// \u003csummary\u003e\n    /// Attempts to delete an entry of type T by key. If the key is not found, nothing happens.\n    /// \u003c/summary\u003e\n    /// \u003ctypeparam name=\"T\"\u003eThe type of object to delete\u003c/typeparam\u003e\n    /// \u003cparam name=\"key\"\u003eThe key to delete\u003c/param\u003e\n    /// \u003cparam name=\"cancelToken\"\u003eCancel token\u003c/param\u003e\n    /// \u003creturns\u003eTask\u003c/returns\u003e\n    Task DeleteAsync\u003cT\u003e(string key, CancellationToken cancelToken = default);\n}\n```\n\n**IMPORTANT**  \nDo not recursively call cache methods. A cache call should not make other caching calls inside the factory method.\n\nYour cache key will be modified by the type parameter, `\u003cT\u003e`. This means you can have duplicate `key` parameters for different types.\n\nCache keys are also prefixed by the entry assembly name by default. This can be changed in the configuration.\n\nThe `CacheParameters` struct can be created by simply passing a `TimeSpan` if you don't know the size. 
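For example (a minimal sketch; it assumes the implicit conversion from `TimeSpan` described here and an injected `ILayeredCache` instance named `cache`):\n\n```cs\n// a TimeSpan converts to CacheParameters with an unknown size;\n// cache \"hello\" under the key \"greeting\" for five minutes\nawait cache.SetAsync(\"greeting\", \"hello\", TimeSpan.FromMinutes(5));\n```\n\n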
You can also pass a tuple of `(TimeSpan, int)` for a duration, size pair.\n\nIf you do know the approximate size of your object, you should specify the size to help the memory compaction background task be more accurate.\n\n`GetOrCreateAsync` example:\n\n```cs\nvar result = await cache.GetOrCreateAsync\u003cstring\u003e(key, async context =\u003e\n{\n    // if you need the key, you can use context.Key to avoid capturing the key parameter, which improves performance\n    var value = await MyExpensiveFunctionThatReturnsAStringAsync();\n\n    // set the cache duration and size; this is an important step not to miss\n    // the tuple is (minutes, size)\n    context.CacheParameters = (0.5, value.Length * 2);\n\n    // you can also set them individually\n    context.Duration = TimeSpan.FromMinutes(0.5);\n    context.Size = value.Length * 2;\n\n    // the context also has a CancelToken property if you need it\n\n    return value;\n}, stoppingToken);\n```\n\n## Serialization\n\nThe configuration options mention a serializer. The default serializer is a json-lz4 serializer that gives a balance of ease of use, performance and smaller cache value sizes.\n\nYou can create your own serializer if desired, or use the json serializer that does not compress, as shown in the configuration example.\n\nTo create your own serializer, implement the following interface:\n\n```cs\n/// \u003csummary\u003e\n/// Interface for serializing cache objects to/from bytes\n/// \u003c/summary\u003e\npublic interface ISerializer\n{\n    /// \u003csummary\u003e\n    /// Deserialize\n    /// \u003c/summary\u003e\n    /// \u003cparam name=\"bytes\"\u003eBytes to deserialize\u003c/param\u003e\n    /// \u003cparam name=\"type\"\u003eType of object to deserialize to\u003c/param\u003e\n    /// \u003creturns\u003eDeserialized object or null if bytes is null or empty\u003c/returns\u003e\n    object? Deserialize(byte[]? 
bytes, Type type);\n\n    /// \u003csummary\u003e\n    /// Deserialize using generic type parameter\n    /// \u003c/summary\u003e\n    /// \u003ctypeparam name=\"T\"\u003eType of object to deserialize\u003c/typeparam\u003e\n    /// \u003cparam name=\"bytes\"\u003eBytes\u003c/param\u003e\n    /// \u003creturns\u003eDeserialized object or null if bytes is null or empty\u003c/returns\u003e\n    T? Deserialize\u003cT\u003e(byte[]? bytes) =\u003e (T?)Deserialize(bytes, typeof(T));\n\n    /// \u003csummary\u003e\n    /// Serialize an object\n    /// \u003c/summary\u003e\n    /// \u003cparam name=\"obj\"\u003eObject to serialize\u003c/param\u003e\n    /// \u003creturns\u003eSerialized bytes or null if obj is null\u003c/returns\u003e\n    byte[]? Serialize(object? obj);\n\n    /// \u003csummary\u003e\n    /// Serialize using generic type parameter\n    /// \u003c/summary\u003e\n    /// \u003ctypeparam name=\"T\"\u003eType of object\u003c/typeparam\u003e\n    /// \u003cparam name=\"obj\"\u003eObject to serialize\u003c/param\u003e\n    /// \u003creturns\u003eSerialized bytes or null if obj is null\u003c/returns\u003e\n    byte[]? Serialize\u003cT\u003e(T? obj) =\u003e Serialize(obj);\n\n    /// \u003csummary\u003e\n    /// Get a short description for the serializer, e.g. json or json-lz4.\n    /// \u003c/summary\u003e\n    string Description { get; }\n}\n```\n\n## Layers\n\nSimple cache uses layers, just like a modern CPU, which also has multiple levels of cache.\n\nUsing multiple layers allows ever-increasing amounts of data to be stored at slightly slower retrieval times.\n\n### Memory cache\n\nThe first layer (L1), the memory cache portion of simple cache, uses IMemoryCache. This will be registered for you automatically in the services collection.\n\n.NET will compact the memory cache based on your settings from the configuration.\n\n### File cache\n\nThe second layer (L2), the file cache portion of simple cache, uses the temp directory by default. 
You can override this.\n\nKeys are hashed using Blake2B and converted to base64.\n\nA background file cleanup task runs to ensure you do not overrun disk space.\n\nIf you are not running on an SSD, it is recommended to disable the file cache by specifying an empty string for the file cache directory.\n\n### Redis cache\n\nThe third and final layer (L3), the redis cache, uses the StackExchange.Redis nuget package.\n\nThe redis layer detects failover and failback in a cluster and handles them gracefully.\n\nKeyspace notifications are sent to keep caches in sync between machines. Run `CONFIG SET notify-keyspace-events KEA` on your redis servers for this to take effect. Simple cache will attempt to do this as well.\n\nSometimes you need to purge your entire cache; do this with caution. To cause simple cache to clear memory and file caches, set a redis key named `__flushall__` to any value, wait a second, then execute a `FLUSHALL` or `FLUSHDB` command.\n\nAs a bonus, a distributed lock factory is provided to acquire locks that need to be synchronized across machines.\n\nYou can inject this interface into your constructors for distributed locking:\n\n```cs\n/// \u003csummary\u003e\n/// Interface for distributed locks\n/// \u003c/summary\u003e\npublic interface IDistributedLockFactory\n{\n\t/// \u003csummary\u003e\n\t/// Attempt to acquire a distributed lock\n\t/// \u003c/summary\u003e\n\t/// \u003cparam name=\"key\"\u003eLock key\u003c/param\u003e\n\t/// \u003cparam name=\"lockTime\"\u003eDuration to hold the lock before it auto-expires. 
Set this to the maximum possible duration you think your code might hold the lock.\u003c/param\u003e\n\t/// \u003cparam name=\"timeout\"\u003eTimeout for acquiring the lock; leave as the default to make only one attempt to acquire the lock\u003c/param\u003e\n\t/// \u003creturns\u003eThe lock or null if the lock could not be acquired\u003c/returns\u003e\n\tTask\u003cIAsyncDisposable?\u003e TryAcquireLockAsync(string key, TimeSpan lockTime, TimeSpan timeout = default);\n}\n```\n\n## ISystemClock\n\nSimple cache uses `TimeProvider` instead of `ISystemClock` as of version 2.0.0.\n\n## Exceptions and null\n\nSimple cache does not cache exceptions and does not cache null. If you must cache such values, wrap them in an object that can go in the cache.\n\n---\n\nThanks for reading!\n\n-- Jeff\n\nhttps://www.digitalruby.com\n","funding_links":["https://github.com/sponsors/jjxtra"],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdigitalruby%2Fsimplecache","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdigitalruby%2Fsimplecache","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdigitalruby%2Fsimplecache/lists"}