{"id":13808705,"url":"https://github.com/MikaAK/request_cache_plug","last_synced_at":"2025-05-14T03:31:21.570Z","repository":{"id":37788259,"uuid":"474760223","full_name":"MikaAK/request_cache_plug","owner":"MikaAK","description":"Request caching for Phoenix \u0026 Absinthe (GraphQL), short circuiting even the JSON decoding/encoding ","archived":false,"fork":false,"pushed_at":"2024-08-23T05:49:41.000Z","size":144,"stargazers_count":24,"open_issues_count":1,"forks_count":5,"subscribers_count":3,"default_branch":"main","last_synced_at":"2024-10-29T08:40:00.899Z","etag":null,"topics":["cache","elixir","graphql","plug"],"latest_commit_sha":null,"homepage":"https://learn-elixir.dev/blogs/using-caching-to-speed-up-large-data-returns-by-1000x","language":"Elixir","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/MikaAK.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-03-27T21:08:04.000Z","updated_at":"2024-10-20T15:45:55.000Z","dependencies_parsed_at":"2023-02-08T07:30:30.581Z","dependency_job_id":"f4a0087f-128e-40fa-9e4d-ad242863288d","html_url":"https://github.com/MikaAK/request_cache_plug","commit_stats":{"total_commits":67,"total_committers":4,"mean_commits":16.75,"dds":"0.20895522388059706","last_synced_commit":"79d8cb5cf4083de87d546f911ec553d02bff178b"},"previous_names":[],"tags_count":21,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MikaAK%2Frequest_cache_plug","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MikaAK%2Frequest
_cache_plug/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MikaAK%2Frequest_cache_plug/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MikaAK%2Frequest_cache_plug/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/MikaAK","download_url":"https://codeload.github.com/MikaAK/request_cache_plug/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":225273262,"owners_count":17448074,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cache","elixir","graphql","plug"],"created_at":"2024-08-04T01:01:49.840Z","updated_at":"2024-11-19T00:31:01.466Z","avatar_url":"https://github.com/MikaAK.png","language":"Elixir","readme":"## RequestCache\n\n[![Test](https://github.com/MikaAK/request_cache_plug/actions/workflows/test-actions.yml/badge.svg)](https://github.com/MikaAK/request_cache_plug/actions/workflows/test-actions.yml)\n[![codecov](https://codecov.io/gh/MikaAK/request_cache_plug/branch/main/graph/badge.svg?token=RF4ASVG5PV)](https://codecov.io/gh/MikaAK/request_cache_plug)\n[![Hex version badge](https://img.shields.io/hexpm/v/request_cache_plug.svg)](https://hex.pm/packages/request_cache_plug)\n\nThis plug allows us to cache our GraphQL queries and Phoenix controller requests declaratively.\n\nWe call the cache inside either a resolver or a controller action, and the response is stored, preventing further\nexecutions of our query on repeat requests.\n\nThe goal of this plug is to short-circuit any processing Phoenix would\nnormally do upon request, including JSON decoding/parsing; the only step that should run is telemetry.\n\n### Installation\n\nThis package can be installed by adding `request_cache_plug` to your list of dependencies in `mix.exs`:\n\n```elixir\ndef deps do\n  [\n    {:request_cache_plug, \"~\u003e 1.0\"}\n  ]\nend\n```\n\nDocumentation can be found at \u003chttps://hexdocs.pm/request_cache_plug\u003e.\n\n### Config\nThis is the default config; it can all be changed, but without any configuration set up, this is what will be used:\n```elixir\nconfig :request_cache_plug,\n  enabled?: true,\n  verbose?: false,\n  graphql_paths: [\"/graphiql\", \"/graphql\"],\n  conn_priv_key: :__shared_request_cache__,\n  request_cache_module: RequestCache.ConCacheStore,\n  default_ttl: :timer.hours(1),\n  default_concache_opts: [\n    ttl_check_interval: :timer.seconds(1),\n    acquire_lock_timeout: :timer.seconds(1),\n    ets_options: [write_concurrency: true, read_concurrency: true]\n  ]\n```\n\n### Usage\nThis plug is intended to be inserted into the `endpoint.ex` fairly early in the pipeline;\nit should go after telemetry but before our parsers:\n\n```elixir\nplug Plug.Telemetry, event_prefix: [:phoenix, :endpoint]\n\nplug RequestCache.Plug\n\nplug Plug.Parsers,\n  parsers: [:urlencoded, :multipart, :json],\n  pass: [\"*/*\"]\n```\n\nWe also need to set up a before_send hook in our absinthe_plug (if not using Absinthe you can skip this step):\n```elixir\nplug Absinthe.Plug, before_send: {RequestCache, :connect_absinthe_context_to_conn}\n```\nThis allows us to see the results of items we put onto our request context from within plugs coming after Absinthe.\n\nAfter that we can utilize our cache in a few ways.\n\n#### Utilization with Phoenix Controllers\n```elixir\ndef index(conn, params) do\n  conn\n    |\u003e RequestCache.store(:timer.seconds(60))\n    |\u003e put_status(200)\n    |\u003e json(%{...})\nend\n```\n\n#### Utilization with Absinthe Resolvers\n```elixir\ndef all(params, _resolution) do\n  # Instead of returning {:ok, value} we return this\n  RequestCache.store(value, :timer.seconds(60))\nend\n```\n\n#### Utilization with Absinthe Middleware\n```elixir\nfield :user, :user do\n  arg :id, non_null(:id)\n\n  middleware RequestCache.Middleware, ttl: :timer.seconds(60)\n\n  resolve \u0026Resolvers.User.find/2\nend\n```\n\n### Specifying Specific Caching Locations\nWe have a few ways to control the caching location of our RequestCache. By default, if you have `con_cache` installed,\nwe have access to `RequestCache.ConCacheStore`, which is the default setting.\nHowever, we can override this by setting `config :request_cache_plug, :request_cache_module, MyCustomCache`.\n\nThe caching module is expected to have the following API:\n```elixir\ndef get(key) do\n  ...\nend\n\ndef put(key, ttl, value) do\n  ...\nend\n```\n\nYou are responsible for starting the cache, including ConCacheStore, so if you're planning to use it, make sure\nyou add `RequestCache.ConCacheStore` to the `application.ex` list of children.\n\nWe can also override the module for a particular request by passing the option to our GraphQL middleware or\nour `\u0026RequestCache.store/2` function as `[ttl: 123, cache: MyCacheModule]`.\n\n##### With Middleware\n\n```elixir\nfield :user, :user do\n  arg :id, non_null(:id)\n\n  middleware RequestCache.Middleware, ttl: :timer.seconds(60), cache: MyCacheModule\n\n  resolve \u0026Resolvers.User.find/2\nend\n```\n\n##### In a Resolver\n\n```elixir\ndef all(params, resolution) do\n  RequestCache.store(value, ttl: :timer.seconds(60), cache: MyCacheModule)\nend\n```\n\n##### In a Controller\n\n```elixir\ndef index(conn, params) do\n  RequestCache.store(conn, ttl: :timer.seconds(60), cache: MyCacheModule)\nend\n```\n\n### Telemetry\n\nCache events are emitted via :telemetry.\n
Events are:\n\n- `[:request_cache_plug, :graphql, :cache_hit]`\n- `[:request_cache_plug, :graphql, :cache_miss]`\n- `[:request_cache_plug, :rest, :cache_hit]`\n- `[:request_cache_plug, :rest, :cache_miss]`\n- `[:request_cache_plug, :cache_put]`\n\nFor GraphQL endpoints it is possible to provide a list of atoms that will be passed through to the event metadata; e.g.:\n\n##### With Middleware\n\n```elixir\nfield :user, :user do\n  arg :id, non_null(:id)\n\n  middleware RequestCache.Middleware,\n    ttl: :timer.seconds(60),\n    cache: MyCacheModule,\n    labels: [:service, :endpoint],\n    whitelisted_query_names: [\"MyQueryName\"] # By default all queries are cached; we can also whitelist based on the query name from the GQL document\n\n  resolve \u0026Resolvers.User.find/2\nend\n```\n\n##### In a Resolver\n\n```elixir\ndef all(params, resolution) do\n  RequestCache.store(value, ttl: :timer.seconds(60), cache: MyCacheModule, labels: [:service, :endpoint])\nend\n```\n\nThe events will look like this:\n\n```elixir\n{\n  [:request_cache_plug, :graphql, :cache_hit],\n  %{count: 1},\n  %{ttl: 3600000, cache_key: \"/graphql:NNNN\", labels: [:service, :endpoint]}\n}\n```\n\n##### Enable Error Caching\nIn order to enable error caching, we can set up `cached_errors` either in our config\nor as an option to `RequestCache.store` or `RequestCache.Middleware`.\n\nThe value of `cached_errors` can be one of `[]`, `:all`, or a list of reason atoms as\ndefined by `Plug.Conn.Status`, such as `:not_found` or `:internal_server_error`.\n\nIn REST this works off the response codes returned. However, in order to use reason atoms in GraphQL\nyou will need to make sure your errors contain some sort of `%{code: \"not_found\"}` response in them.\n\nTake a look at [error_message](https://github.com/MikaAK/elixir_error_message) for a compatible error system.\n\n### Notes/Gotchas\n- In order for this caching to work, we cannot use POST requests as specced out by GraphQL, at least not for queries. Fortunately this doesn't actually matter, since we can use any HTTP method we want (there will be a limit to query size); in a production app you may be doing this already due to the caching you gain from CloudFlare\n- Caches are stored via an MD5-hashed key that correlates to your query in GraphQL, or in REST to your URL path + query parameters\n- Absinthe and ConCache are optional dependencies; if you don't have them you won't have access to `RequestCache.Middleware` or `RequestCache.ConCacheStore`\n- If no ConCache is found, you must set the `:request_cache_module` config to something else\n\n### Caching Header\nWhen an item is served from the cache, we return a header `rc-cache-status` which has a value of `HIT`. Using this you can tell whether the item was\nserved out of the cache; without it, the item was freshly fetched.\nWe can also invalidate specific items in the cache by using the `rc-cache-key` header, which returns the key being used for the cache.\n\n### Example Reduction\nIn the case of a large (16MB) payload running through Absinthe, this plug cuts response times from 400+ms to \u003c400μs.\n\n\u003cimg width=\"704\" alt=\"image\" src=\"https://user-images.githubusercontent.com/4650931/161464277-713e994b-1246-43ac-82a1-fb2442cd7bce.png\"\u003e\n","funding_links":[],"categories":["Caching"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FMikaAK%2Frequest_cache_plug","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FMikaAK%2Frequest_cache_plug","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FMikaAK%2Frequest_cache_plug/lists"}