{"id":16513959,"url":"https://github.com/twin/gocache","last_synced_at":"2025-12-15T14:29:20.970Z","repository":{"id":41996095,"uuid":"276264170","full_name":"TwiN/gocache","owner":"TwiN","description":"High performance and lightweight in-memory cache library with LRU and FIFO support as well as memory-usage-based-eviction","archived":false,"fork":false,"pushed_at":"2025-04-16T02:59:38.000Z","size":1231,"stargazers_count":40,"open_issues_count":0,"forks_count":5,"subscribers_count":5,"default_branch":"master","last_synced_at":"2025-04-16T03:53:36.914Z","etag":null,"topics":["cache","caching","expiration","expire","fifo","fifo-cache","go","go-cache","golang","in-memory","inmemory","inmemory-cache","key-value","kvstore","lru","lru-cache","memory-usage","memory-usage-based-eviction","ttl"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/TwiN.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null},"funding":{"github":["TwiN"]}},"created_at":"2020-07-01T03:08:36.000Z","updated_at":"2025-04-16T02:59:36.000Z","dependencies_parsed_at":"2023-12-11T00:24:09.008Z","dependency_job_id":"19275431-01b9-461e-913e-cb2d4f540df3","html_url":"https://github.com/TwiN/gocache","commit_stats":null,"previous_names":["twinproduction/gocache"],"tags_count":29,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/TwiN%2Fgocache","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/TwiN%2Fgocache/tags","releases_url":"https://repos.ecosyste.ms/a
pi/v1/hosts/GitHub/repositories/TwiN%2Fgocache/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/TwiN%2Fgocache/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/TwiN","download_url":"https://codeload.github.com/TwiN/gocache/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253141394,"owners_count":21860536,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cache","caching","expiration","expire","fifo","fifo-cache","go","go-cache","golang","in-memory","inmemory","inmemory-cache","key-value","kvstore","lru","lru-cache","memory-usage","memory-usage-based-eviction","ttl"],"created_at":"2024-10-11T16:10:58.794Z","updated_at":"2025-12-15T14:29:20.503Z","avatar_url":"https://github.com/TwiN.png","language":"Go","readme":"# gocache\n![test](https://github.com/TwiN/gocache/workflows/test/badge.svg?branch=master) \n[![Go Report Card](https://goreportcard.com/badge/github.com/TwiN/gocache)](https://goreportcard.com/report/github.com/TwiN/gocache)\n[![codecov](https://codecov.io/gh/TwiN/gocache/branch/master/graph/badge.svg)](https://codecov.io/gh/TwiN/gocache)\n[![Go version](https://img.shields.io/github/go-mod/go-version/TwiN/gocache.svg)](https://github.com/TwiN/gocache)\n[![Go Reference](https://pkg.go.dev/badge/github.com/TwiN/gocache.svg)](https://pkg.go.dev/github.com/TwiN/gocache/v2)\n[![Follow TwiN](https://img.shields.io/github/followers/TwiN?label=Follow\u0026style=social)](https://github.com/TwiN)\n\ngocache is an easy-to-use, high-performance, 
lightweight and thread-safe (goroutine-safe) in-memory key-value cache \nwith support for LRU and FIFO eviction policies as well as expiration, bulk operations and even retrieval of keys by pattern.\n\n\n## Table of Contents\n\n- [Features](#features)\n- [Usage](#usage)\n  - [Initializing the cache](#initializing-the-cache)\n  - [Functions](#functions)\n  - [Examples](#examples)\n    - [Creating or updating an entry](#creating-or-updating-an-entry)\n    - [Getting an entry](#getting-an-entry)\n    - [Deleting an entry](#deleting-an-entry)\n    - [Complex example](#complex-example)\n- [Persistence](#persistence)\n- [Eviction](#eviction)\n  - [MaxSize](#maxsize)\n  - [MaxMemoryUsage](#maxmemoryusage)\n- [Expiration](#expiration)\n- [Performance](#performance)\n  - [Summary](#summary)\n  - [Results](#results)\n- [FAQ](#faq)\n  - [How can I persist the data on application termination?](#how-can-i-persist-the-data-on-application-termination)\n\n\n## Features\ngocache supports the following cache eviction policies: \n- First in first out (FIFO)\n- Least recently used (LRU)\n\nIt also supports cache entry TTL, which is both active and passive. Active expiration means that if you attempt \nto retrieve a cache key that has already expired, it will delete it on the spot and the behavior will be as if\nthe cache key didn't exist. 
As for passive expiration, there's a background task that will take care of deleting\nexpired keys.\n\nIt also includes what you'd expect from a cache, like GET/SET, bulk operations and get by pattern.\n\n\n## Usage\n```\ngo get -u github.com/TwiN/gocache/v2\n```\n\n\n### Initializing the cache\n```go\ncache := gocache.NewCache().WithMaxSize(1000).WithEvictionPolicy(gocache.LeastRecentlyUsed)\n```\n\nIf you're planning on using expiration (`SetWithTTL` or `Expire`) and you want expired entries to be automatically deleted \nin the background, make sure to start the janitor when you instantiate the cache:\n\n```go\ncache.StartJanitor()\n```\n\n### Functions\n| Function                          | Description                                                                                                                                                                                                                                                        |\n|-----------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| WithMaxSize                       | Sets the max size of the cache. `gocache.NoMaxSize` means there is no limit. If not set, the default max size is `gocache.DefaultMaxSize`.                                                                                                                         |\n| WithMaxMemoryUsage                | Sets the max memory usage of the cache. `gocache.NoMaxMemoryUsage` means there is no limit. The default behavior is to not evict based on memory usage.                                                                                                            |\n| WithEvictionPolicy                | Sets the eviction algorithm to be used when the cache reaches the max size. 
If not set, the default eviction policy is `gocache.FirstInFirstOut` (FIFO).                                                                                                           |\n| WithDefaultTTL                    | Sets the default TTL for each entry.                                                                                                                                                                                                                               |\n| WithForceNilInterfaceOnNilPointer | Configures whether values with a nil pointer passed to write functions should be forcefully set to nil. Defaults to true.                                                                                                                                          |\n| StartJanitor                      | Starts the janitor, which is in charge of deleting expired cache entries in the background.                                                                                                                                                                        |\n| StopJanitor                       | Stops the janitor.                                                                                                                                                                                                                                                 |\n| Set                               | Same as `SetWithTTL`, but using the default TTL (which is `gocache.NoExpiration`, unless configured otherwise).                                                                                                                                                    |\n| SetWithTTL                        | Creates or updates a cache entry with the given key, value and expiration time. If the max size after the aforementioned operation is above the configured max size, the tail will be evicted. 
Depending on the eviction policy, the tail is defined as the oldest entry (FIFO) or the least recently used entry (LRU). |\n| SetAll                            | Same as `Set`, but in bulk.                                                                                                                                           |\n| SetAllWithTTL                     | Same as `SetWithTTL`, but in bulk.                                                                                                                                    |\n| Get                               | Gets a cache entry by its key.                                                                                                                                        |\n| GetByKeys                         | Gets a map of entries by their keys. The resulting map will contain all keys, even if some of the keys in the slice passed as parameter were not present in the cache. |\n| GetAll                            | Gets all cache entries.                                                                                                                                               |\n| GetKeysByPattern                  | Retrieves a slice of keys that match a given pattern.                                                                                                                 |\n| Delete                            | Removes a key from the cache.
|\n| DeleteAll                         | Removes multiple keys from the cache.                                                                                                                                 |\n| DeleteKeysByPattern               | Removes all keys that match a given pattern.                                                                                                                          |\n| Count                             | Gets the size of the cache. This includes cache keys which may have already expired, but have not been removed yet.                                                   |\n| Clear                             | Wipes the cache.                                                                                                                                                      |\n| TTL                               | Gets the time until a cache key expires.                                                                                                                              |\n| Expire                            | Sets the expiration time of an existing cache key.
|\n\nFor further documentation, please refer to [Go Reference](https://pkg.go.dev/github.com/TwiN/gocache)\n\n\n### Examples\n\n#### Creating or updating an entry\n```go\ncache.Set(\"key\", \"value\")\ncache.Set(\"key\", 1)\ncache.Set(\"key\", struct{ Text string }{Text: \"value\"})\ncache.SetWithTTL(\"key\", []byte(\"value\"), 24*time.Hour)\n```\n\n#### Getting an entry\n```go\nvalue, exists := cache.Get(\"key\")\n```\nYou can also get multiple entries by using `cache.GetByKeys([]string{\"key1\", \"key2\"})`\n\n#### Deleting an entry\n```go\ncache.Delete(\"key\")\n```\nYou can also delete multiple entries by using `cache.DeleteAll([]string{\"key1\", \"key2\"})`\n\n#### Complex example\n```go\npackage main\n\nimport (\n    \"fmt\"\n    \"time\"\n\n    \"github.com/TwiN/gocache/v2\"\n)\n\nfunc main() {\n    cache := gocache.NewCache().WithEvictionPolicy(gocache.LeastRecentlyUsed).WithMaxSize(10000)\n    cache.StartJanitor() // Passively manages expired entries\n    defer cache.StopJanitor()\n\n    cache.Set(\"key\", \"value\")\n    cache.SetWithTTL(\"key-with-ttl\", \"value\", 60*time.Minute)\n    cache.SetAll(map[string]any{\"k1\": \"v1\", \"k2\": \"v2\", \"k3\": \"v3\"})\n\n    fmt.Println(\"[Count] Cache size:\", cache.Count())\n\n    value, exists := cache.Get(\"key\")\n    fmt.Printf(\"[Get] key=key; value=%s; exists=%v\\n\", value, exists)\n    for key, value := range cache.GetByKeys([]string{\"k1\", \"k2\", \"k3\"}) {\n        fmt.Printf(\"[GetByKeys] key=%s; value=%s\\n\", key, value)\n    }\n    for _, key := range cache.GetKeysByPattern(\"key*\", 0) {\n        fmt.Printf(\"[GetKeysByPattern] pattern=key*; key=%s\\n\", key)\n    }\n\n    cache.Expire(\"key\", time.Hour)\n    time.Sleep(500*time.Millisecond)\n    timeUntilExpiration, _ := cache.TTL(\"key\")\n    fmt.Println(\"[TTL] Number of seconds before 'key' expires:\", int(timeUntilExpiration.Seconds()))\n\n    cache.Delete(\"key\")\n    cache.DeleteAll([]string{\"k1\", \"k2\", \"k3\"})\n    \n    
cache.Clear()\n    fmt.Println(\"[Count] Cache size after clearing the cache:\", cache.Count())\n}\n```\n\n\u003cdetails\u003e\n  \u003csummary\u003eOutput\u003c/summary\u003e\n\n```\n[Count] Cache size: 5\n[Get] key=key; value=value; exists=true\n[GetByKeys] key=k1; value=v1\n[GetByKeys] key=k2; value=v2\n[GetByKeys] key=k3; value=v3\n[GetKeysByPattern] pattern=key*; key=key-with-ttl\n[GetKeysByPattern] pattern=key*; key=key\n[TTL] Number of seconds before 'key' expires: 3599\n[Count] Cache size after clearing the cache: 0\n```\n\u003c/details\u003e\n\n\n## Persistence\nPrior to v2, gocache supported persistence out of the box.\n\nAfter some thinking, I decided that persistence added too many dependencies, and given that this is a cache library\nand most people wouldn't be interested in persistence, I decided to get rid of it.\n\nThat being said, you can use the `GetAll` and `SetAll` methods of `gocache.Cache` to implement persistence yourself.\n\n\n## Eviction\n### MaxSize\nEviction by MaxSize is the default behavior, and is also the most efficient.\n\nThe code below will create a cache that has a maximum size of 1000:\n```go\ncache := gocache.NewCache().WithMaxSize(1000)\n```\nThis means that whenever an operation causes the total size of the cache to go above 1000, the tail will be evicted.\n\n### MaxMemoryUsage\nEviction by MaxMemoryUsage is **disabled by default**, and is in alpha.\n\nThe code below will create a cache that has a maximum memory usage of 50MB:\n```go\ncache := gocache.NewCache().WithMaxSize(0).WithMaxMemoryUsage(50*gocache.Megabyte)\n```\nThis means that whenever an operation causes the total memory usage of the cache to go above 50MB, one or more tails\nwill be evicted.\n\nUnlike evictions caused by reaching the MaxSize, evictions triggered by MaxMemoryUsage may lead to multiple entries\nbeing evicted in a row. 
The reason for this is that if, for instance, you had 100 entries of 0.1MB each and you suddenly added \na single entry of 10MB, 100 entries would need to be evicted to make enough space for that new big entry.\n\nIt's very important to keep in mind that eviction by MaxMemoryUsage is approximate.\n\n**The only memory taken into consideration is the size of the cache, not the size of the entire application.**\nIf you pass along 100MB worth of data in a matter of seconds, even though the cache's memory usage will remain\nunder 50MB (or whatever you configure the MaxMemoryUsage to), the memory footprint generated by that 100MB will \nstill exist until the next GC cycle.\n\nAs previously mentioned, this is a work in progress, and here's a list of the things you should keep in mind:\n- The memory usage of structs is a rough estimate and may not reflect the actual memory usage.\n- Native types (string, int, bool, []byte, etc.) are the most accurate for calculating the memory usage.\n- Adding an entry bigger than the configured MaxMemoryUsage will work, but it will evict all other entries.\n\n\n## Expiration\nThere are two ways that the deletion of expired keys can take place:\n- Active\n- Passive\n\n**Active deletion of expired keys** happens when an attempt is made to access the value of a cache entry that expired. \n`Get`, `GetByKeys` and `GetAll` are the only functions that can trigger active deletion of expired keys.\n\n**Passive deletion of expired keys** runs in the background and is managed by the janitor. \nIf you do not start the janitor, there will be no passive deletion of expired keys.\n\n\n## Performance\n### Summary\n- **Set**: Both map and gocache have the same performance.\n- **Get**: Map is faster than gocache.\n\nThis is because gocache keeps track of the head and the tail for eviction and expiration/TTL. \n\nUltimately, the difference is negligible. 
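To illustrate where that overhead comes from, here is a minimal sketch of the classic map + doubly-linked-list LRU layout. This is **not** gocache's actual implementation, just a generic textbook example: every `Get` hit must also move the entry to the front of the list so the back stays the least recently used entry, which is work a plain map lookup never does.

```go
package main

import (
	"container/list"
	"fmt"
)

// entry is what each list element carries: the key is kept so the
// tail can be removed from the map when it gets evicted.
type entry struct {
	key   string
	value any
}

// lruCache pairs a map (for O(1) lookup) with a doubly linked list
// (for O(1) recency updates and tail eviction).
type lruCache struct {
	maxSize int
	entries map[string]*list.Element
	order   *list.List // front = most recently used, back = eviction candidate
}

func newLRUCache(maxSize int) *lruCache {
	return &lruCache{maxSize: maxSize, entries: map[string]*list.Element{}, order: list.New()}
}

func (c *lruCache) Set(key string, value any) {
	if el, ok := c.entries[key]; ok {
		el.Value.(*entry).value = value
		c.order.MoveToFront(el)
		return
	}
	c.entries[key] = c.order.PushFront(&entry{key, value})
	if c.order.Len() > c.maxSize { // over capacity: evict the tail
		tail := c.order.Back()
		c.order.Remove(tail)
		delete(c.entries, tail.Value.(*entry).key)
	}
}

func (c *lruCache) Get(key string) (any, bool) {
	el, ok := c.entries[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(el) // the extra bookkeeping a bare map doesn't need
	return el.Value.(*entry).value, true
}

func main() {
	c := newLRUCache(2)
	c.Set("a", 1)
	c.Set("b", 2)
	c.Get("a") // touch "a", so "b" becomes the tail
	c.Set("c", 3) // over capacity: evicts "b"
	_, ok := c.Get("b")
	fmt.Println("b evicted:", !ok)
}
```

`MoveToFront` is O(1), so the asymptotic cost of `Get` is unchanged; the gap versus a plain map is just a few extra pointer writes, which matches the "negligible" difference shown in the benchmarks below.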
\n\nWe could add a way to disable eviction or disable expiration altogether just to match the map's performance, \nbut if you're looking into using a library like gocache, odds are, you want more than just a map.\n\n\n### Results\n| key    | value    |\n|:-------|:---------|\n| goos   | windows  |\n| goarch | amd64    |\n| cpu    | i7-9700K |\n| mem    | 32G DDR4 |\n\n```\n// Normal map\nBenchmarkMap_Get-8                                                              49944228     24.2 ns/op      7 B/op   0 allocs/op\nBenchmarkMap_Set/small_value-8                                                   3939964    394.1 ns/op    188 B/op   2 allocs/op\nBenchmarkMap_Set/medium_value-8                                                  3868586    395.5 ns/op    191 B/op   2 allocs/op\nBenchmarkMap_Set/large_value-8                                                   3992138    385.3 ns/op    186 B/op   2 allocs/op\n// Gocache                                                                                               \nBenchmarkCache_Get/FirstInFirstOut-8                                            27907950     44.3 ns/op     7 B/op    0 allocs/op\nBenchmarkCache_Get/LeastRecentlyUsed-8                                          28211396     44.2 ns/op     7 B/op    0 allocs/op\nBenchmarkCache_Set/FirstInFirstOut_small_value-8                                 3139538    373.5 ns/op    185 B/op   3 allocs/op\nBenchmarkCache_Set/FirstInFirstOut_medium_value-8                                3099516    378.6 ns/op    186 B/op   3 allocs/op\nBenchmarkCache_Set/FirstInFirstOut_large_value-8                                 3086776    386.7 ns/op    186 B/op   3 allocs/op\nBenchmarkCache_Set/LeastRecentlyUsed_small_value-8                               3070555    379.0 ns/op    187 B/op   3 allocs/op\nBenchmarkCache_Set/LeastRecentlyUsed_medium_value-8                              3056928    383.8 ns/op    187 B/op   3 allocs/op\nBenchmarkCache_Set/LeastRecentlyUsed_large_value-8           
                    3108250    383.8 ns/op    186 B/op   3 allocs/op\nBenchmarkCache_SetUsingMaxMemoryUsage/medium_value-8                             2773315    449.0 ns/op    210 B/op   4 allocs/op\nBenchmarkCache_SetUsingMaxMemoryUsage/large_value-8                              2731818    440.0 ns/op    211 B/op   4 allocs/op\nBenchmarkCache_SetUsingMaxMemoryUsage/small_value-8                              2659296    446.8 ns/op    213 B/op   4 allocs/op\nBenchmarkCache_SetWithMaxSize/100_small_value-8                                  4848658    248.8 ns/op    114 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSize/10000_small_value-8                                4117632    293.7 ns/op    106 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSize/100000_small_value-8                               3867402    313.0 ns/op    110 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSize/100_medium_value-8                                 4750057    250.1 ns/op    113 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSize/10000_medium_value-8                               4143772    294.5 ns/op    106 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSize/100000_medium_value-8                              3768883    313.2 ns/op    111 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSize/100_large_value-8                                  4822646    251.1 ns/op    114 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSize/10000_large_value-8                                4154428    291.6 ns/op    106 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSize/100000_large_value-8                               3897358    313.7 ns/op    110 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSizeAndLRU/100_small_value-8                            4784180    254.2 ns/op    114 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSizeAndLRU/10000_small_value-8                          4067042    292.0 ns/op    106 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSizeAndLRU/100000_small_value-8                         3832760    
313.8 ns/op    111 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSizeAndLRU/100_medium_value-8                           4846706    252.2 ns/op    114 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSizeAndLRU/10000_medium_value-8                         4103817    292.5 ns/op    106 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSizeAndLRU/100000_medium_value-8                        3845623    315.1 ns/op    111 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSizeAndLRU/100_large_value-8                            4744513    257.9 ns/op    114 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSizeAndLRU/10000_large_value-8                          3956316    299.5 ns/op    106 B/op   3 allocs/op\nBenchmarkCache_SetWithMaxSizeAndLRU/100000_large_value-8                         3876843    351.3 ns/op    110 B/op   3 allocs/op\nBenchmarkCache_GetSetMultipleConcurrent-8                                         750088   1566.0 ns/op    128 B/op   8 allocs/op\nBenchmarkCache_GetSetConcurrentWithFrequentEviction/FirstInFirstOut-8            3836961    316.2 ns/op     80 B/op   1 allocs/op\nBenchmarkCache_GetSetConcurrentWithFrequentEviction/LeastRecentlyUsed-8          3846165    315.6 ns/op     80 B/op   1 allocs/op\nBenchmarkCache_GetConcurrently/FirstInFirstOut-8                                 4830342    239.8 ns/op      8 B/op   1 allocs/op\nBenchmarkCache_GetConcurrently/LeastRecentlyUsed-8                               4895587    243.2 ns/op      8 B/op   1 allocs/op\n(Trimmed \"BenchmarkCache_\" for readability)                                                              \nWithForceNilInterfaceOnNilPointer/true_with_nil_struct_pointer-8                 6901461    178.5 ns/op      7 B/op   1 allocs/op\nWithForceNilInterfaceOnNilPointer/true-8                                         6629566    180.7 ns/op      7 B/op   1 allocs/op\nWithForceNilInterfaceOnNilPointer/false_with_nil_struct_pointer-8                6282798    170.1 ns/op      7 B/op   1 
allocs/op\nWithForceNilInterfaceOnNilPointer/false-8                                        6741382    172.6 ns/op      7 B/op   1 allocs/op\nWithForceNilInterfaceOnNilPointerWithConcurrency/true_with_nil_struct_pointer-8  4432951    258.0 ns/op      8 B/op   1 allocs/op\nWithForceNilInterfaceOnNilPointerWithConcurrency/true-8                          4676943    244.4 ns/op      8 B/op   1 allocs/op\nWithForceNilInterfaceOnNilPointerWithConcurrency/false_with_nil_struct_pointer-8 4818418    239.6 ns/op      8 B/op   1 allocs/op\nWithForceNilInterfaceOnNilPointerWithConcurrency/false-8                         5025937    238.2 ns/op      8 B/op   1 allocs/op\n```\n\n\n## FAQ\n\n### How can I persist the data on application termination?\nWhile creating your own auto save feature might come in handy, it may still lead to loss of data if the application \nautomatically saves every 10 minutes and your application crashes 9 minutes after the previous save.\n\nTo increase your odds of not losing any data, you can use Go's `signal` package, more specifically its `Notify` function\nwhich allows listening for termination signals like SIGTERM and SIGINT. 
Once a termination signal is caught, you can\nadd the necessary logic for a graceful shutdown.\n\nIn the following example, the code that would usually be present in the `main` function is moved to a different function\nnamed `Start` which is launched on a different goroutine so that listening for termination signals is what blocks the\nmain goroutine instead:\n```go\npackage main\n\nimport (\n    \"log\"\n    \"os\"\n    \"os/signal\"\n    \"syscall\"\n\n    \"github.com/TwiN/gocache/v2\"\n)\n\nvar cache = gocache.NewCache()\n\nfunc main() {\n    data := retrieveCacheEntriesUsingWhateverMeanYouUsedToPersistIt()\n    cache.SetAll(data)\n    // Start everything else on another goroutine to prevent blocking the main goroutine\n    go Start()\n    // Wait for termination signal\n    sig := make(chan os.Signal, 1)\n    done := make(chan bool, 1)\n    signal.Notify(sig, os.Interrupt, syscall.SIGTERM)\n    go func() {\n        \u003c-sig\n        log.Println(\"Received termination signal, attempting to gracefully shut down\")\n        // Persist the cache entries\n        cacheEntries := cache.GetAll()\n        persistCacheEntriesHoweverYouWant(cacheEntries)\n        // Tell the main goroutine that we're done\n        done \u003c- true\n    }()\n    \u003c-done\n    log.Println(\"Shutting down\")\n}\n```\n\nNote that this won't protect you from a SIGKILL, as this signal cannot be caught.\n","funding_links":["https://github.com/sponsors/TwiN"],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftwin%2Fgocache","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftwin%2Fgocache","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftwin%2Fgocache/lists"}