{"id":13583993,"url":"https://github.com/ledgetech/ledge","last_synced_at":"2025-12-15T14:52:47.154Z","repository":{"id":52118790,"uuid":"1975130","full_name":"ledgetech/ledge","owner":"ledgetech","description":"An RFC compliant and ESI capable HTTP cache for Nginx / OpenResty, backed by Redis","archived":false,"fork":false,"pushed_at":"2021-05-07T08:33:05.000Z","size":3329,"stargazers_count":454,"open_issues_count":15,"forks_count":59,"subscribers_count":32,"default_branch":"master","last_synced_at":"2024-05-18T20:47:40.170Z","etag":null,"topics":["cache","edge","esi","http","lua","luajit","openresty","redis","redis-sentinel"],"latest_commit_sha":null,"homepage":"","language":"Lua","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ledgetech.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":".github/FUNDING.yml","license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null},"funding":{"github":"pintsized"}},"created_at":"2011-06-29T22:11:19.000Z","updated_at":"2024-03-25T00:24:46.000Z","dependencies_parsed_at":"2022-09-08T06:42:11.605Z","dependency_job_id":null,"html_url":"https://github.com/ledgetech/ledge","commit_stats":null,"previous_names":["pintsized/ledge"],"tags_count":81,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ledgetech%2Fledge","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ledgetech%2Fledge/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ledgetech%2Fledge/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ledgetech%2Fledge/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ledgetech","download_url":"https://c
odeload.github.com/ledgetech/ledge/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":223265120,"owners_count":17116295,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cache","edge","esi","http","lua","luajit","openresty","redis","redis-sentinel"],"created_at":"2024-08-01T15:03:56.758Z","updated_at":"2025-12-15T14:52:42.099Z","avatar_url":"https://github.com/ledgetech.png","language":"Lua","readme":"# Ledge\n\n[![Build Status](https://travis-ci.org/ledgetech/ledge.svg?branch=master)](https://travis-ci.org/ledgetech/ledge)\n\nAn RFC compliant and [ESI](https://www.w3.org/TR/esi-lang) capable HTTP cache for [Nginx](http://nginx.org) / [OpenResty](https://openresty.org), backed by [Redis](http://redis.io).\n\nLedge can be utilised as a fast, robust and scalable alternative to Squid / Varnish etc, either installed standalone or integrated into an existing Nginx server or load balancer.\n\nMoreover, it is particularly suited to applications where the origin is expensive or distant, making it desirable to serve from cache as optimistically as possible.\n\n\n## Table of Contents\n\n* [Installation](#installation)\n* [Philosophy and Nomenclature](#philosophy-and-nomenclature)\n    * [Cache keys](#cache-keys)\n    * [Streaming design](#streaming-design)\n    * [Collapsed forwarding](#collapsed-forwarding)\n    * [Advanced cache patterns](#advanced-cache-patterns)\n* [Minimal configuration](#minimal-configuration)\n* [Config systems](#config-systems)\n* [Events system](#events-system)\n* [Caching 
basics](#caching-basics)\n* [Purging](#purging)\n* [Serving stale](#serving-stale)\n* [Edge Side Includes](#edge-side-includes)\n* [API](#api)\n    * [ledge.configure](#ledgeconfigure)\n    * [ledge.set_handler_defaults](#ledgeset_handler_defaults)\n    * [ledge.create\\_handler](#ledgecreate_handler)\n    * [ledge.create\\_worker](#ledgecreate_worker)\n    * [ledge.bind](#ledgebind)\n    * [handler.bind](#handlerbind)\n    * [handler.run](#handlerrun)\n    * [worker.run](#workerrun)\n* [Handler configuration options](#handler-configuration-options)\n* [Events](#events)\n* [Administration](#administration)\n    * [Managing Qless](#managing-qless)\n* [Licence](#licence)\n\n\n## Installation\n\n[OpenResty](http://openresty.org/) is a superset of [Nginx](http://nginx.org), bundling [LuaJIT](http://luajit.org/) and the [lua-nginx-module](https://github.com/openresty/lua-nginx-module) as well as many other things. Whilst it is possible to build all of these things into Nginx yourself, we recommend using the latest OpenResty.\n\n\n### 1. Download and install:\n\n* [OpenResty](http://openresty.org/) \u003e= 1.11.x\n* [Redis](http://redis.io/download) \u003e= 2.8.x\n* [LuaRocks](https://luarocks.org/)\n\n\n### 2. Install Ledge using LuaRocks:\n\n```\nluarocks install ledge\n```\n\nThis will install the latest stable release, and all other Lua module dependencies, which if installing manually without LuaRocks are:\n\n* [lua-resty-http](https://github.com/pintsized/lua-resty-http)\n* [lua-resty-redis-connector](https://github.com/pintsized/lua-resty-redis-connector)\n* [lua-resty-qless](https://github.com/pintsized/lua-resty-qless)\n* [lua-resty-cookie](https://github.com/cloudflare/lua-resty-cookie)\n* [lua-ffi-zlib](https://github.com/hamishforbes/lua-ffi-zlib)\n* [lua-resty-upstream](https://github.com/hamishforbes/lua-resty-upstream) *(optional, for load balancing / healthchecking upstreams)*\n\n\n### 3. 
Review OpenResty documentation\n\nIf you are new to OpenResty, it's quite important to review the [lua-nginx-module](https://github.com/openresty/lua-nginx-module) documentation on how to run Lua code in Nginx, as the environment is unusual. Specifically, it's useful to understand the meaning of the different Nginx phase hooks such as `init_by_lua` and `content_by_lua`, as well as how the `lua-nginx-module` locates Lua modules with the [lua_package_path](https://github.com/openresty/lua-nginx-module#lua_package_path) directive.\n\n[Back to TOC](#table-of-contents)\n\n\n## Philosophy and Nomenclature\n\nThe central module is called `ledge`, and provides factory methods for creating `handler` instances (for handling a request) and `worker` instances (for running background tasks). The `ledge` module is also where global configuration is managed.\n\nA `handler` is short lived. It is typically created at the beginning of the Nginx `content` phase for a request, and when its [run()](#handlerrun) method is called, takes responsibility for processing the current request and delivering a response. When [run()](#handlerrun) has completed, HTTP status, headers and body will have been delivered to the client.\n\nA `worker` is long lived, and there is one per Nginx worker process. It is created when Nginx starts a worker process, and dies when the Nginx worker dies. The `worker` pops queued background jobs and processes them.\n\nAn `upstream` is the only thing which must be manually configured, and points to another HTTP host where actual content lives. Typically one would use DNS to resolve client connections to the Nginx server running Ledge, and tell Ledge where to fetch from with the `upstream` configuration. As such, Ledge isn't designed to work as a forwarding proxy.\n\n[Redis](http://redis.io) is used for much more than cache storage. We rely heavily on its data structures to maintain cache `metadata`, as well as embedded Lua scripts for atomic task management and so on. 
By default, all cache body data and `metadata` will be stored in the same Redis instance. The location of cache `metadata` is global, set when Nginx starts up.\n\nCache body data is handled by the `storage` system, and as mentioned, by default shares the same Redis instance as the `metadata`. However, `storage` is abstracted via a [driver system](#storage_driver) making it possible to store cache body data in a separate Redis instance, or a group of horizontally scalable Redis instances via a [proxy](https://github.com/twitter/twemproxy), or to roll your own `storage` driver, for example targeting PostgreSQL or even simply a filesystem. It's perhaps important to consider that by default all cache storage uses Redis, and as such is bound by system memory.\n\n[Back to TOC](#table-of-contents)\n\n### Cache keys\n\nA goal of any caching system is to safely maximise the HIT potential. That is, normalise factors which would split the cache wherever possible, in order to share as much cache as possible.\n\nThis is tricky to generalise, and so by default Ledge puts sane defaults from the request URI into the cache key, and provides a means for this to be customised by altering the [cache\\_key\\_spec](#cache_key_spec).\n\nURI arguments are sorted alphabetically by default, so `http://example.com?a=1\u0026b=2` would hit the same cache entry as `http://example.com?b=2\u0026a=1`.\n\n[Back to TOC](#table-of-contents)\n\n### Streaming design\n\nHTTP response sizes can be wildly different, sometimes tiny and sometimes huge, and it's not always possible to know the total size up front.\n\nTo guarantee predictable memory usage regardless of response sizes Ledge operates a streaming design, meaning it only ever operates on a single `buffer` per request at a time. 
This is equally true whether fetching upstream, reading from cache, or serving to the client.\n\nIt's also true (mostly) when processing [ESI](#edge-side-includes) instructions, except in the case where an instruction is found to span multiple buffers. In this case, we continue buffering until a complete instruction can be understood, up to a [configurable limit](#esi_max_size).\n\nThis streaming design also improves latency, since we start serving the first `buffer` to the client request as soon as we're done with it, rather than fetching and saving an entire resource prior to serving. The `buffer` size can be [tuned](#buffer_size) even on a per `location` basis.\n\n[Back to TOC](#table-of-contents)\n\n### Collapsed forwarding\n\nLedge can attempt to collapse concurrent origin requests for known (previously) cacheable resources into a single upstream request. That is, if an upstream request for a resource is in progress, subsequent concurrent requests for the same resource will not bother the upstream, and instead wait for the first request to finish.\n\nThis is particularly useful to reduce upstream load if a spike of traffic occurs for expired and expensive content (since the chances of concurrent requests are higher on slower content).\n\n[Back to TOC](#table-of-contents)\n\n### Advanced cache patterns\n\nBeyond standard RFC compliant cache behaviours, Ledge has many features designed to maximise cache HIT rates and to reduce latency for requests. 
See the sections on [Edge Side Includes](#edge-side-includes), [serving stale](#serving-stale) and [revalidating on purge](#purging) for more information.\n\n[Back to TOC](#table-of-contents)\n\n\n## Minimal configuration\n\nAssuming you have Redis running on `localhost:6379`, and your upstream is at `localhost:8080`, add the following to the `nginx.conf` file in your OpenResty installation.\n\n```lua\nhttp {\n    if_modified_since Off;\n    lua_check_client_abort On;\n\n    init_by_lua_block {\n        require(\"ledge\").configure({\n            redis_connector_params = {\n                url = \"redis://127.0.0.1:6379/0\",\n            },\n        })\n\n        require(\"ledge\").set_handler_defaults({\n            upstream_host = \"127.0.0.1\",\n            upstream_port = 8080,\n        })\n    }\n\n    init_worker_by_lua_block {\n        require(\"ledge\").create_worker():run()\n    }\n\n    server {\n        server_name example.com;\n        listen 80;\n\n        location / {\n            content_by_lua_block {\n                require(\"ledge\").create_handler():run()\n            }\n        }\n    }\n}\n```\n\n[Back to TOC](#table-of-contents)\n\n\n## Config systems\n\nThere are four different layers to the configuration system. Firstly there is the main [Redis config](#ledgeconfigure) and [handler defaults](#ledgeset_handler_defaults) config, which are global and must be set during the Nginx `init` phase.\n\nBeyond this, you can specify [handler instance config](#ledgecreate_handler) on an Nginx `location` block basis, and finally there are some performance tuning config options for the [worker](#ledgecreate_worker) instances.\n\nIn addition, there is an [events system](#events-system) for binding Lua functions to mid-request events, providing opportunities to dynamically alter configuration.\n\n[Back to TOC](#table-of-contents)\n\n\n## Events system\n\nLedge makes most of its decisions based on the content it is working with. 
HTTP request and response headers drive the semantics for content delivery, and so rather than having countless configuration options to change this, we instead provide opportunities to alter the given semantics when necessary.\n\nFor example, if an `upstream` fails to set a long enough cache expiry, rather than inventing an option such as \"extend\\_ttl\", we would instead `bind` to the `after_upstream_request` event, and adjust the response headers to include the TTL we're hoping for.\n\n```lua\nhandler:bind(\"after_upstream_request\", function(res)\n    res.header[\"Cache-Control\"] = \"max-age=86400\"\nend)\n```\n\nThis particular event fires after we've fetched upstream, but before Ledge makes any decisions about whether the content can be cached or not. Once we've adjusted our headers, Ledge will read them as if they came from the upstream itself.\n\nNote that multiple functions can be bound to a single event, either globally or per handler, and they will be called in the order they were bound. There is also currently no means to inspect which functions have been bound, or to unbind them.\n\nSee the [events](#events) section for a complete list of events and their definitions.\n\n[Back to TOC](#table-of-contents)\n\n### Binding globally\n\nBinding a function globally means it will fire for the given event, on all requests. 
This is perhaps useful if you have many different `location` blocks, but need to always perform the same logic.\n\n```lua\ninit_by_lua_block {\n    require(\"ledge\").bind(\"before_serve\", function(res)\n        res.header[\"X-Foo\"] = \"bar\"   -- always set X-Foo to bar\n    end)\n}\n```\n\n[Back to TOC](#table-of-contents)\n\n### Binding to handlers\n\nMore commonly, we just want to alter behaviour for a given Nginx `location`.\n\n```lua\nlocation /foo_location {\n    content_by_lua_block {\n        local handler = require(\"ledge\").create_handler()\n\n        handler:bind(\"before_serve\", function(res)\n            res.header[\"X-Foo\"] = \"bar\"   -- only set X-Foo for this location\n        end)\n\n        handler:run()\n    }\n}\n```\n\n[Back to TOC](#table-of-contents)\n\n### Performance implications\n\nWriting simple logic for events is not expensive at all (and in many cases will be JIT compiled). If you need to consult service endpoints during an event then obviously consider that this will affect your overall latency, and make sure you do everything in a **non-blocking** way, e.g. using [cosockets](https://github.com/openresty/lua-nginx-module#ngxsockettcp) provided by OpenResty, or a driver based upon this.\n\nIf you have lots of event handlers, consider that creating closures in Lua is relatively expensive. A good solution would be to make your own module, and pass the defined functions in.\n\n```lua\nlocation /foo_location {\n    content_by_lua_block {\n        local handler = require(\"ledge\").create_handler()\n        handler:bind(\"before_serve\", require(\"my.handler.hooks\").add_foo_header)\n        handler:run()\n    }\n}\n```\n\n[Back to TOC](#table-of-contents)\n\n\n## Caching basics\n\nFor normal HTTP caching operation, no additional configuration is required. If the HTTP response indicates the resource can be cached, then it will be cached. If the HTTP request indicates it accepts cached responses, it will be served from cache. 
Note that these two conditions aren't mutually exclusive - a request could specify `no-cache`, and this will indeed trigger a fetch upstream, but if the response is cacheable then it will be saved and served to subsequent cache-accepting requests.\n\nFor more information on the myriad factors affecting this, including end-to-end revalidation and so on, please refer to [RFC 7234](https://tools.ietf.org/html/rfc7234).\n\nThe goal is to be 100% RFC compliant, but with some extensions to allow more aggressive caching in certain cases. If something doesn't work as you expect, please do feel free to [raise an issue](https://github.com/pintsized/ledge).\n\n[Back to TOC](#table-of-contents)\n\n\n## Purging\n\nTo manually invalidate a cache item (or purge), we support the non-standard `PURGE` method familiar to users of Squid. Send an HTTP request to the URI with the method set, and Ledge will attempt to invalidate the item, returning status `200` on success and `404` if the URI was not found in cache, along with a JSON body for more details.\n\nA purge request will affect all representations associated with the cache key, for example compressed and uncompressed responses separated by the `Vary: Accept-Encoding` response header will all be purged.\n\n`$\u003e curl -X PURGE -H \"Host: example.com\" http://cache.example.com/page1 | jq .`\n\n```json\n{\n    \"purge_mode\": \"invalidate\",\n    \"result\": \"nothing to purge\"\n}\n```\n\nThere are three purge modes, selectable by setting the `X-Purge` request header with one or more of the following values:\n\n* `invalidate`: (default) marks the item as expired, but doesn't delete anything.\n* `delete`: hard removes the item from cache.\n* `revalidate`: invalidates but then schedules a background revalidation to re-prime the cache.\n\n`$\u003e curl -X PURGE -H \"X-Purge: revalidate\" -H \"Host: example.com\" http://cache.example.com/page1 | jq .`\n\n```json\n{\n  \"purge_mode\": \"revalidate\",\n  \"qless_job\": {\n    
\"options\": {\n      \"priority\": 4,\n      \"jid\": \"5eeabecdc75571d1b93e9c942dfcebcb\",\n      \"tags\": [\n        \"revalidate\"\n      ]\n    },\n    \"jid\": \"5eeabecdc75571d1b93e9c942dfcebcb\",\n    \"klass\": \"ledge.jobs.revalidate\"\n  },\n  \"result\": \"already expired\"\n}\n```\n\nBackground revalidation jobs can be tracked in the qless metadata. See [managing qless](#managing-qless) for more information.\n\nIn general, `PURGE` is considered an administration task and probably shouldn't be allowed from the internet. Consider limiting it by IP address for example:\n\n```nginx\nlimit_except GET POST PUT DELETE {\n    allow   127.0.0.1;\n    deny    all;\n}\n```\n\n[Back to TOC](#table-of-contents)\n\n### JSON API\n\nA JSON based API is also available for purging cache multiple cache items at once.\nThis requires a `PURGE` request with a `Content-Type` header set to `application/json` and a valid JSON request body.\n\nValid parameters\n * `uris` - Array of URIs to purge, can contain wildcard URIs\n * `purge_mode` - As the `X-Purge` header in a normal purge request\n * `headers` - Hash of additional headers to include in the purge request\n\nReturns a results hash keyed by URI or a JSON error response\n\n`$\u003e curl -X PURGE -H \"Content-Type: Application/JSON\" http://cache.example.com/ -d '{\"uris\": [\"http://www.example.com/1\", \"http://www.example.com/2\"]}' | jq .`\n\n```json\n{\n  \"purge_mode\": \"invalidate\",\n  \"result\": {\n    \"http://www.example.com/1\": {\n      \"result\": \"purged\"\n    },\n    \"http://www.example.com/2\":{\n      \"result\": \"nothing to purge\"\n    }\n  }\n}\n```\n\n[Back to TOC](#table-of-contents)\n\n### Wildcard purging\n\nWildcard (\\*) patterns are also supported in `PURGE` URIs, which will always return a status of `200` and a JSON body detailing a background job. Wildcard purges involve scanning the entire keyspace, and so can take a little while. 
See [keyspace\\_scan\\_count](#keyspace_scan_count) for tuning help.\n\nIn addition, the `X-Purge` mode will propagate to all URIs purged as a result of the wildcard, making it possible to trigger site / section wide revalidation for example. Be careful what you wish for.\n\n`$\u003e curl -v -X PURGE -H \"X-Purge: revalidate\" -H \"Host: example.com\" http://cache.example.com/* | jq .`\n\n```json\n{\n  \"purge_mode\": \"revalidate\",\n  \"qless_job\": {\n    \"options\": {\n      \"priority\": 5,\n      \"jid\": \"b2697f7cb2e856cbcad1f16682ee20b0\",\n      \"tags\": [\n        \"purge\"\n      ]\n    },\n    \"jid\": \"b2697f7cb2e856cbcad1f16682ee20b0\",\n    \"klass\": \"ledge.jobs.purge\"\n  },\n  \"result\": \"scheduled\"\n}\n```\n\n[Back to TOC](#table-of-contents)\n\n\n## Serving stale\n\nContent is considered \"stale\" when its age is beyond its TTL. However, depending on the value of [keep_cache_for](#keep_cache_for) (which defaults to 1 month), we don't actually expire content in Redis straight away.\n\nThis allows us to implement the stale cache control extensions described in [RFC5861](https://tools.ietf.org/html/rfc5861), which provides request and response header semantics for describing how stale something can be served, when it should be revalidated in the background, and how long we can serve stale content in the event of upstream errors.\n\nThis can be very effective in ensuring a fast user experience. For example, if your content has a genuine `max-age` of 24 hours, consider changing this to 1 hour, and adding `stale-while-revalidate` for 23 hours. 
The total TTL is therefore the same, but the first request after the first hour will trigger backgrounded revalidation, extending the TTL for a further 1 hour + 23 hours.\n\nIf your origin server cannot be configured in this way, you can always override by [binding](#events) to the [before_save](#before_save) event.\n\n```lua\nhandler:bind(\"before_save\", function(res)\n    -- Valid for 1 hour, stale-while-revalidate for 23 hours, stale-if-error for three days\n    res.header[\"Cache-Control\"] = \"max-age=3600, stale-while-revalidate=82800, stale-if-error=259200\"\nend)\n```\n\nIn other words, set the TTL to the highest comfortable frequency of requests at the origin, and `stale-while-revalidate` to the longest comfortable TTL, to increase the chances of background revalidation occurring. Note that the first stale request will obviously get stale content, and so very long values can result in very out of date content for one request.\n\nAll stale behaviours are constrained by normal cache control semantics. 
For example, if the origin is down, and the response could be served stale due to the upstream error, but the request contains `Cache-Control: no-cache` or even `Cache-Control: max-age=60` where the content is older than 60 seconds, the client will be served the error, rather than the stale content.\n\n[Back to TOC](#table-of-contents)\n\n\n## Edge Side Includes\n\nAlmost complete support for the [ESI 1.0 Language Specification](https://www.w3.org/TR/esi-lang) is included, with a few exceptions, and a few enhancements.\n\n```html\n\u003chtml\u003e\n\u003cesi:include src=\"/header\" /\u003e\n\u003cbody\u003e\n\n   \u003cesi:choose\u003e\n      \u003cesi:when test=\"$(QUERY_STRING{foo}) == 'bar'\"\u003e\n         Hi\n      \u003c/esi:when\u003e\n      \u003cesi:otherwise\u003e\n         \u003cesi:choose\u003e\n            \u003cesi:when test=\"$(HTTP_COOKIE{mycookie}) == 'yep'\"\u003e\n               \u003cesi:include src=\"http://example.com/_fragments/fragment1\" /\u003e\n            \u003c/esi:when\u003e\n         \u003c/esi:choose\u003e\n      \u003c/esi:otherwise\u003e\n   \u003c/esi:choose\u003e\n\n\u003c/body\u003e\n\u003c/html\u003e\n```\n\n[Back to TOC](#table-of-contents)\n\n### Enabling ESI\n\nNote that simply [enabling](#esi_enabled) ESI might not be enough. We also check the [content type](#esi_content_types) against the allowed types specified, but more importantly ESI processing is contingent upon the [Edge Architecture Specification](https://www.w3.org/TR/edge-arch/). When enabled, Ledge will advertise capabilities upstream with the `Surrogate-Capability` request header, and expect the upstream response to include a `Surrogate-Control` header delegating ESI processing to Ledge.\n\nIf your upstream is not ESI aware, a common approach is to bind to the [after\\_upstream\\_request](#after_upstream_request) event in order to add the `Surrogate-Control` header manually. 
E.g.\n\n```lua\nhandler:bind(\"after_upstream_request\", function(res)\n    -- Don't enable ESI on redirect responses\n    -- Don't override Surrogate Control if it already exists\n    local status = res.status\n    if not res.header[\"Surrogate-Control\"] and not (status \u003e 300 and status \u003c 303) then\n        res.header[\"Surrogate-Control\"] = 'content=\"ESI/1.0\"'\n    end\nend)\n```\n\nNote that if ESI is processed, downstream cache-ability is automatically dropped since you don't want other intermediaries or browsers caching the result.\n\nIt's therefore best to only set `Surrogate-Control` for content which you know has ESI instructions. Whilst Ledge will detect the presence of ESI instructions when saving (and do nothing on cache HITs if no instructions are present), on a cache MISS it will have already dropped downstream cache headers before reading / saving the body. This is a side-effect of the [streaming design](#streaming-design).\n\n[Back to TOC](#table-of-contents)\n\n### Regular expressions in conditions\n\nIn addition to the operators defined in the\n[ESI specification](https://www.w3.org/TR/esi-lang), we also support regular\nexpressions in conditions (as string literals), using the `=~` operator.\n\n```html\n\u003cesi:choose\u003e\n   \u003cesi:when test=\"$(QUERY_STRING{name}) =~ '/james|john/i'\"\u003e\n      Hi James or John\n   \u003c/esi:when\u003e\n\u003c/esi:choose\u003e\n```\n\nSupported modifiers are as per the [ngx.re.\\*](https://github.com/openresty/lua-nginx-module#ngxrematch) documentation.\n\n[Back to TOC](#table-of-contents)\n\n### Custom ESI variables\n\nIn addition to the variables defined in the [ESI specification](https://www.w3.org/TR/esi-lang), it is possible to provide run time custom variables using the [esi_custom_variables](#esi_custom_variables) handler config option.\n\n```lua\ncontent_by_lua_block {\n   require(\"ledge\").create_handler({\n      esi_custom_variables = {\n         messages = {\n            foo 
= \"bar\",\n         },\n      },\n   }):run()\n}\n```\n\n```html\n\u003cesi:vars\u003e$(MESSAGES{foo})\u003c/esi:vars\u003e\n```\n\n[Back to TOC](#table-of-contents)\n\n### ESI Args\n\nIt can be tempting to use URI arguments to pages using ESI in order to change layout dynamically, but this comes at the cost of generating multiple cache items - one for each permutation of URI arguments.\n\nESI args is a neat feature to get around this, by using a configurable [prefix](#esi_args_prefix), which defaults to `esi_`. URI arguments with this prefix are removed from the cache key and also from upstream requests, and instead stuffed into the `$(ESI_ARGS{foo})` variable for use in ESI, typically in conditions. That is, think of them as magic URI arguments which have meaning for the ESI processor only, and should never affect cacheability or upstream content generation.\n\n`$\u003e curl -H \"Host: example.com\" http://cache.example.com/page1?esi_display_mode=summary`\n\n```html\n\u003cesi:choose\u003e\n   \u003cesi:when test=\"$(ESI_ARGS{display_mode}) == 'summary'\"\u003e\n      \u003c!-- SUMMARY --\u003e\n   \u003c/esi:when\u003e\n   \u003cesi:when test=\"$(ESI_ARGS{display_mode}) == 'details'\"\u003e\n      \u003c!-- DETAILS --\u003e\n   \u003c/esi:when\u003e\n\u003c/esi:choose\u003e\n```\n\nIn this example, the `esi_display_mode` values of `summary` or `details` will return the same cache HIT, but display different content.\n\nIf `$(ESI_ARGS)` is used without a field key, it renders the original query string arguments, e.g. `esi_foo=bar\u0026esi_display_mode=summary`, URL encoded.\n\n[Back to TOC](#table-of-contents)\n\n\n### Variable Escaping\n\nESI variables are minimally escaped by default in order to prevent user's injecting additional ESI tags or XSS exploits.\n\nUnescaped variables are available by prefixing the variable name with `RAW_`. 
This should be used with care.\n\n```html\n# /esi/test.html?a=\u003cscript\u003ealert()\u003c/script\u003e\n\u003cesi:vars\u003e\n$(QUERY_STRING{a})     \u003c!-- \u0026lt;script\u0026gt;alert()\u0026lt;/script\u0026gt; --\u003e\n$(RAW_QUERY_STRING{a}) \u003c!--  \u003cscript\u003ealert()\u003c/script\u003e --\u003e\n\u003c/esi:vars\u003e\n```\n\n[Back to TOC](#table-of-contents)\n\n### Missing ESI features\n\nThe following parts of the [ESI specification](https://www.w3.org/TR/esi-lang) are not supported, but could be in due course if a need is identified.\n\n* `\u003cesi:inline\u003e` not implemented (or advertised as a capability).\n* No support for the `onerror` or `alt` attributes for `\u003cesi:include\u003e`. Instead, we \"continue\" on error by default.\n* `\u003cesi:try | attempt | except\u003e` not implemented.\n* The \"dictionary (special)\" substructure variable type for `HTTP_USER_AGENT` is not implemented.\n\n[Back to TOC](#table-of-contents)\n\n\n## API\n\n### ledge.configure\n\nsyntax: `ledge.configure(config)`\n\nThis function provides Ledge with Redis connection details for all cache `metadata` and background jobs. This is global and cannot be specified or adjusted outside the Nginx `init` phase.\n\n```lua\ninit_by_lua_block {\n    require(\"ledge\").configure({\n        redis_connector_params = {\n            url = \"redis://mypassword@127.0.0.1:6380/3\",\n        },\n        qless_db = 4,\n    })\n}\n```\n\n`config` is a table with the following options (unrecognised config will error hard on start up).\n\n[Back to TOC](#table-of-contents)\n\n\n#### redis_connector_params\n\n`default: {}`\n\nLedge uses [lua-resty-redis-connector](https://github.com/pintsized/lua-resty-redis-connector) to handle all Redis connections. 
It simply passes anything given in `redis_connector_params` straight to [lua-resty-redis-connector](https://github.com/pintsized/lua-resty-redis-connector), so review the documentation there for options, including how to use [Redis Sentinel](https://redis.io/topics/sentinel).\n\n\n#### qless_db\n\n`default: 1`\n\nSpecifies the Redis DB number to store [qless](https://github.com/pintsized/lua-resty-qless) background job data.\n\n[Back to TOC](#table-of-contents)\n\n\n### ledge.set\\_handler\\_defaults\n\nsyntax: `ledge.set_handler_defaults(config)`\n\nThis method overrides the default configuration used for all spawned request `handler` instances. This is global and cannot be specified or adjusted outside the Nginx `init` phase, but these defaults can be overridden on a per `handler` basis. See [below](#handler-configuration-options) for a complete list of configuration options.\n\n```lua\ninit_by_lua_block {\n    require(\"ledge\").set_handler_defaults({\n        upstream_host = \"127.0.0.1\",\n        upstream_port = 8080,\n    })\n}\n```\n\n[Back to TOC](#table-of-contents)\n\n\n### ledge.create\\_handler\n\nsyntax: `local handler = ledge.create_handler(config)`\n\nCreates a `handler` instance for the current request. Config given here will be merged with the defaults, allowing certain options to be adjusted on a per Nginx `location` basis.\n\n```lua\nserver {\n    server_name example.com;\n    listen 80;\n\n    location / {\n        content_by_lua_block {\n            require(\"ledge\").create_handler({\n                upstream_port = 8081,\n            }):run()\n        }\n    }\n}\n```\n\n[Back to TOC](#table-of-contents)\n\n\n### ledge.create\\_worker\n\nsyntax: `local worker = ledge.create_worker(config)`\n\nCreates a `worker` instance inside the current Nginx worker process, for processing background jobs. 
You only need to call this once, inside a single `init_worker` block; it will run in each configured Nginx worker process.\n\nJob queues can be run with varying concurrency per worker, which can be set by providing `config` here. See [managing qless](#managing-qless) for more details.\n\n```lua\ninit_worker_by_lua_block {\n    require(\"ledge\").create_worker({\n        interval = 1,\n        gc_queue_concurrency = 1,\n        purge_queue_concurrency = 2,\n        revalidate_queue_concurrency = 5,\n    }):run()\n}\n```\n\n[Back to TOC](#table-of-contents)\n\n\n### ledge.bind\n\nsyntax: `ledge.bind(event_name, callback)`\n\nBinds the `callback` function to the event given in `event_name`, globally for all requests on this system. Arguments to `callback` vary based on the event. See [below](#events) for event definitions.\n\n[Back to TOC](#table-of-contents)\n\n\n### handler.bind\n\nsyntax: `handler:bind(event_name, callback)`\n\nBinds the `callback` function to the event given in `event_name` for this handler only. Note the `:` in `handler:bind()`, which differs from the global `ledge.bind()`.\n\nArguments to `callback` vary based on the event. See [below](#events) for event definitions.\n\n[Back to TOC](#table-of-contents)\n\n\n### handler.run\n\nsyntax: `handler:run()`\n\nMust be called during the `content_by_lua` phase. It processes the current request and serves a response. 
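\n\nFor instance, you might bind an event callback (see [Events](#events) below) before calling `run()`. This is only a sketch; the `X-Powered-By` header removed here is purely illustrative:\n\n```lua\ncontent_by_lua_block {\n    local handler = require(\"ledge\").create_handler()\n\n    -- Remove a (hypothetical) header from every response before serving\n    handler:bind(\"before_serve\", function(res)\n        res.header[\"X-Powered-By\"] = nil\n    end)\n\n    handler:run()\n}\n```\n\n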
If you fail to call this method in your `location` block, nothing will happen.\n\n[Back to TOC](#table-of-contents)\n\n\n### worker.run\n\nsyntax: `worker:run()`\n\nMust be called during the `init_worker` phase, otherwise background tasks will not be run, including garbage collection, which is very important.\n\n[Back to TOC](#table-of-contents)\n\n\n### Handler configuration options\n\n* [storage_driver](#storage_driver)\n* [storage_driver_config](#storage_driver_config)\n* [origin_mode](#origin_mode)\n* [upstream_connect_timeout](#upstream_connect_timeout)\n* [upstream_send_timeout](#upstream_send_timeout)\n* [upstream_read_timeout](#upstream_read_timeout)\n* [upstream_keepalive_timeout](#upstream_keepalive_timeout)\n* [upstream_keepalive_poolsize](#upstream_keepalive_poolsize)\n* [upstream_host](#upstream_host)\n* [upstream_port](#upstream_port)\n* [upstream_use_ssl](#upstream_use_ssl)\n* [upstream_ssl_server_name](#upstream_ssl_server_name)\n* [upstream_ssl_verify](#upstream_ssl_verify)\n* [buffer_size](#buffer_size)\n* [advertise_ledge](#advertise_ledge)\n* [keep_cache_for](#keep_cache_for)\n* [minimum_old_entity_download_rate](#minimum_old_entity_download_rate)\n* [esi_enabled](#esi_enabled)\n* [esi_content_types](#esi_content_types)\n* [esi_allow_surrogate_delegation](#esi_allow_surrogate_delegation)\n* [esi_recursion_limit](#esi_recursion_limit)\n* [esi_args_prefix](#esi_args_prefix)\n* [esi_custom_variables](#esi_custom_variables)\n* [esi_max_size](#esi_max_size)\n* [esi_attempt_loopback](#esi_attempt_loopback)\n* [esi_vars_cookie_blacklist](#esi_vars_cookie_blacklist)\n* [esi_disable_third_party_includes](#esi_disable_third_party_includes)\n* [esi_third_party_includes_domain_whitelist](#esi_third_party_includes_domain_whitelist)\n* [enable_collapsed_forwarding](#enable_collapsed_forwarding)\n* [collapsed_forwarding_window](#collapsed_forwarding_window)\n* [gunzip_enabled](#gunzip_enabled)\n* [keyspace_scan_count](#keyspace_scan_count)\n* 
[cache_key_spec](#cache_key_spec)\n* [max_uri_args](#max_uri_args)\n\n\n#### storage_driver\n\ndefault: `ledge.storage.redis`\n\nThis is a `string` value, which will be used to attempt to load a storage driver. Any third party driver here can accept its own config options (see below), but must provide the following interface:\n\n* `bool new()`\n* `bool connect()`\n* `bool close()`\n* `number get_max_size()` *(return nil for no max)*\n* `bool exists(string entity_id)`\n* `bool delete(string entity_id)`\n* `bool set_ttl(string entity_id, number ttl)`\n* `number get_ttl(string entity_id)`\n* `function get_reader(object response)`\n* `function get_writer(object response, number ttl, function onsuccess, function onfailure)`\n\n*Note, whilst it is possible to configure storage drivers on a per `location` basis, it is **strongly** recommended that you never do this, and consider storage drivers to be system wide, much like the main Redis config. If you really need different storage driver configurations for different locations, then it will work, but features such as purging using wildcards will silently not work. YMMV.*\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### storage_driver_config\n\n`default: {}`\n\nStorage configuration can vary based on the driver. Currently we only have a Redis driver.\n\n[Back to TOC](#handler-configuration-options)\n\n\n##### Redis storage driver config\n\n* `redis_connector_params` Redis params table, as per [lua-resty-redis-connector](https://github.com/pintsized/lua-resty-redis-connector)\n* `max_size` (bytes), defaults to `1MB`\n* `supports_transactions` defaults to `true`, set to false if using a Redis proxy.\n\nIf `supports_transactions` is set to `false`, cache bodies are not written atomically. However, if there is an error writing, the main Redis system will be notified and the overall transaction will be aborted. 
This can result in orphaned body entities in the storage system, which will eventually expire. The only reason to turn this off is if you are using a Redis proxy, as any transaction-related commands will break the connection.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### upstream_connect_timeout\n\ndefault: `1000 (ms)`\n\nMaximum time to wait for an upstream connection (in milliseconds). If it is exceeded, we send a `503` status code, unless [stale_if_error](#stale_if_error) is configured.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### upstream_send_timeout\n\ndefault: `2000 (ms)`\n\nMaximum time to wait when sending data on a connected upstream socket (in milliseconds). If it is exceeded, we send a `503` status code, unless [stale_if_error](#stale_if_error) is configured.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### upstream_read_timeout\n\ndefault: `10000 (ms)`\n\nMaximum time to wait for a read on a connected upstream socket (in milliseconds). If it is exceeded, we send a `503` status code, unless [stale_if_error](#stale_if_error) is configured.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### upstream_keepalive_timeout\n\ndefault: `75000`\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### upstream_keepalive_poolsize\n\ndefault: `64`\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### upstream_host\n\ndefault: `\"\"`\n\nSpecifies the hostname or IP address of the upstream host. If a hostname is specified, you must configure the Nginx [resolver](http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver) somewhere, for example:\n\n```nginx\nresolver 8.8.8.8;\n```\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### upstream_port\n\ndefault: `80`\n\nSpecifies the port of the upstream host.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### upstream_use_ssl\n\ndefault: `false`\n\nToggles the use of SSL on the upstream connection. 
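\n\nFor example, a handler proxying to a TLS origin might look like this (a sketch; the hostname is illustrative):\n\n```lua\nrequire(\"ledge\").create_handler({\n    upstream_host = \"origin.example.com\",  -- hypothetical origin\n    upstream_port = 443,\n    upstream_use_ssl = true,\n    upstream_ssl_server_name = \"origin.example.com\",\n    upstream_ssl_verify = true,\n}):run()\n```\n\n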
Other `upstream_ssl_*` options will be ignored if this is not set to `true`.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### upstream_ssl_server_name\n\ndefault: `\"\"`\n\nSpecifies the SSL server name used for Server Name Indication (SNI). See [sslhandshake](https://github.com/openresty/lua-nginx-module#tcpsocksslhandshake) for more information.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### upstream_ssl_verify\n\ndefault: `true`\n\nToggles SSL verification. See [sslhandshake](https://github.com/openresty/lua-nginx-module#tcpsocksslhandshake) for more information.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### cache_key_spec\n\n`default: cache_key_spec = { \"scheme\", \"host\", \"uri\", \"args\" },`\n\nSpecifies the format for creating cache keys. The default spec above will create keys in Redis similar to:\n\n```\nledge:cache:http:example.com:/about::\nledge:cache:http:example.com:/about:p=2\u0026q=foo:\n```\n\nThe list of available string identifiers in the spec is:\n\n* `scheme` either http or https\n* `host` the hostname of the current request\n* `port` the public port of the current request\n* `uri` the URI (without args)\n* `args` the URI args, sorted alphabetically\n\nIn addition to these string identifiers, dynamic parameters can be added to the cache key by providing functions. 
Any functions given must expect no arguments and return a string value.\n\n```lua\nlocal function get_device_type()\n    -- dynamically work out device type\n    return \"tablet\"\nend\n\nrequire(\"ledge\").create_handler({\n    cache_key_spec = {\n        get_device_type,\n        \"scheme\",\n        \"host\",\n        \"uri\",\n        \"args\",\n    }\n}):run()\n```\n\nConsider leveraging vary, via the [before_vary_selection](#before_vary_selection) event, for separating cache entries rather than modifying the main `cache_key_spec` directly.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### origin_mode\n\ndefault: `ledge.ORIGIN_MODE_NORMAL`\n\nDetermines the overall behaviour for connecting to the origin. `ORIGIN_MODE_NORMAL` will assume the origin is up, and connect as necessary.\n\n`ORIGIN_MODE_AVOID` is similar to Squid's `offline_mode`, where any retained cache (expired or not) will be served rather than trying the origin, regardless of cache-control headers, but the origin will be tried if there is no cache to serve.\n\n`ORIGIN_MODE_BYPASS` is the same as `AVOID`, except if there is no cache to serve we send a `503 Service Unavailable` status code to the client and never attempt an upstream connection.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### keep_cache_for\n\ndefault: `86400 * 30 (1 month in seconds)`\n\nSpecifies how long to retain cache data past its expiry date. This allows us to serve stale cache in the event of upstream failure with [stale_if_error](#stale_if_error) or [origin_mode](#origin_mode) settings.\n\nItems will be evicted when under memory pressure provided you are using one of the Redis [volatile eviction policies](http://redis.io/topics/lru-cache), so there should generally be no real need to lower this for space reasons.\n\nItems at the extreme end of this (i.e. 
nearly a month old) are clearly very rarely requested, or more likely, have been removed at the origin.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### minimum_old_entity_download_rate\n\ndefault: `56 (kbps)`\n\nClients reading slower than this who are also unfortunate enough to have started reading from an entity which has been replaced (due to another client causing a revalidation for example), may have their entity garbage collected before they finish, resulting in an incomplete resource being delivered.\n\nLowering this is fairer on slow clients, but widens the potential window for multiple old entities to stack up, which in turn could threaten Redis storage space and force evictions.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### enable_collapsed_forwarding\n\ndefault: `false`\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### collapsed_forwarding_window\n\nWhen collapsed forwarding is enabled, if a fatal error occurs during the origin request, the collapsed requests may never receive the response they are waiting for. This setting puts a limit on how long they will wait, and how long before new requests will decide to try the origin for themselves.\n\nIf this is set shorter than your origin takes to respond, then you may get more upstream requests than desired. Fatal errors (server reboot etc) may result in hanging connections for up to the maximum time set. Normal errors (such as upstream timeouts) work independently of this setting.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### gunzip_enabled\n\ndefault: `true`\n\nWith this enabled, gzipped responses will be uncompressed on the fly for clients that do not set `Accept-Encoding: gzip`. 
Note that if we receive a gzipped response for a resource containing ESI instructions, we gunzip whilst saving and store uncompressed, since we need to read the ESI instructions.\n\nAlso note that `Range` requests for gzipped content must be ignored - the full response will be returned.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### buffer_size\n\ndefault: `2^16 (64KB in bytes)`\n\nSpecifies the internal buffer size (in bytes) used for data to be read/written/served. Upstream responses are read in chunks of this maximum size, preventing allocation of large amounts of memory in the event of receiving large files. Data is also stored internally as a list of chunks, and delivered to the Nginx output chain buffers in the same fashion.\n\nThe only exception is if ESI is configured, and Ledge has determined there are ESI instructions to process, and any of these instructions span a given chunk. In this case, buffers are concatenated until a complete instruction is found, and then ESI operates on this new buffer, up to a maximum of [esi_max_size](#esi_max_size).\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### keyspace_scan_count\n\ndefault: `1000`\n\nTunes the behaviour of keyspace scans, which occur when sending a PURGE request with wildcard syntax. 
A higher number may be better if latency to Redis is high and the keyspace is large.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### max_uri_args\n\ndefault: `100`\n\nLimits the number of URI arguments returned in calls to [ngx.req.get_uri_args()](https://github.com/openresty/lua-nginx-module#ngxreqget_uri_args), to protect against DOS attacks.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### esi_enabled\n\ndefault: `false`\n\nToggles [ESI](http://www.w3.org/TR/esi-lang) scanning and processing, though behaviour is also contingent upon [esi_content_types](#esi_content_types) and [esi_allow_surrogate_delegation](#esi_allow_surrogate_delegation) settings, as well as `Surrogate-Control` / `Surrogate-Capability` headers.\n\nESI instructions are detected on the slow path (i.e. when fetching from the origin), so only instructions which are known to be present are processed on cache HITs.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### esi_content_types\n\ndefault: `{ text/html }`\n\nSpecifies content types to perform ESI processing on. All other content types will not be considered for processing.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### esi_allow_surrogate_delegation\n\ndefault: `false`\n\n[ESI Surrogate Delegation](http://www.w3.org/TR/edge-arch) allows downstream intermediaries to advertise a capability to process ESI instructions nearer to the client. If this is set to `true`, any downstream advertising this capability will disable ESI processing in Ledge, delegating it downstream.\n\nWhen set to a Lua table of IP address strings, delegation will only be allowed to these specific hosts. 
This may be important if ESI instructions contain sensitive data which must be removed.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### esi_recursion_limit\n\ndefault: `10`\n\nLimits fragment inclusion nesting, to avoid accidental infinite recursion.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### esi_args_prefix\n\ndefault: \"esi\\_\"\n\nURI args prefix for parameters to be ignored from the cache key (and not proxied upstream), for use exclusively with ESI rendering logic. Set to nil to disable the feature.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### esi_custom_variables\n\ndefault: `{}`\n\nAny variables supplied here will be available anywhere ESI vars can be evaluated. See [Custom ESI variables](#custom-esi-variables).\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### esi_max_size\n\ndefault: `1024 * 1024 (bytes)`\n\nSpecifies the maximum size (in bytes) a buffer may grow to whilst scanning for complete ESI instructions. See [buffer_size](#buffer_size).\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### esi_attempt_loopback\n\ndefault: `true`\n\nIf an ESI subrequest has the same `scheme` and `host` as the parent request, we loop the connection back to the current\n`server_addr` and `server_port` in order to avoid going over the network.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### esi_vars_cookie_blacklist\n\ndefault: `{}`\n\nCookie names given here will not be expandable as ESI variables: e.g. `$(HTTP_COOKIE)` or `$(HTTP_COOKIE{foo})`. 
However they\nare not removed from the request data, and will still be propagated to `\u003cesi:include\u003e` subrequests.\n\nThis is useful if your client is sending a sensitive cookie that you don't ever want to accidentally evaluate in server output.\n\n```lua\nrequire(\"ledge\").create_handler({\n    esi_vars_cookie_blacklist = {\n        secret = true,\n        [\"my-secret-cookie\"] = true,\n    }\n}):run()\n```\n\nCookie names are given as the table key with a truthy value, for O(1) runtime lookup.\n\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### esi_disable_third_party_includes\n\ndefault: `false`\n\n`\u003cesi:include\u003e` tags can make requests to any arbitrary URI. Turn this on to require that the included URI's domain matches the domain of the current request.\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### esi_third_party_includes_domain_whitelist\n\ndefault: `{}`\n\nIf third party includes are disabled, you can also explicitly provide a whitelist of allowed third party domains.\n\n```lua\nrequire(\"ledge\").create_handler({\n    esi_disable_third_party_includes = true,\n    esi_third_party_includes_domain_whitelist = {\n        [\"example.com\"] = true,\n    }\n}):run()\n```\n\nHostnames are given as the table key with a truthy value, for O(1) lookup.\n\n*Note; This behaviour was introduced in v2.2*\n\n[Back to TOC](#handler-configuration-options)\n\n\n#### advertise_ledge\n\ndefault: `true`\n\nIf set to `false`, disables advertising the software name and version, e.g. 
`(ledge/2.01)`, in the `Via` response header.\n\n[Back to TOC](#handler-configuration-options)\n\n\n### Events\n\n* [after_cache_read](#after_cache_read)\n* [before_upstream_connect](#before_upstream_connect)\n* [before_upstream_request](#before_upstream_request)\n* [before_esi_include_request](#before_esi_include_request)\n* [after_upstream_request](#after_upstream_request)\n* [before_save](#before_save)\n* [before_serve](#before_serve)\n* [before_save_revalidation_data](#before_save_revalidation_data)\n* [before_vary_selection](#before_vary_selection)\n\n#### after_cache_read\n\nsyntax: `bind(\"after_cache_read\", function(res) -- end)`\n\nparams: `res`. The cached response table.\n\nFires directly after the response was successfully loaded from cache.\n\nThe `res` table given contains:\n\n* `res.header` the table of case-insensitive HTTP response headers\n* `res.status` the HTTP response status code\n\n*Note; there are other fields and methods attached, but it is strongly advised to never adjust anything other than the above*\n\n[Back to TOC](#events)\n\n\n#### before_upstream_connect\n\nsyntax: `bind(\"before_upstream_connect\", function(handler) -- end)`\n\nparams: `handler`. The current handler instance.\n\nFires before the default `handler.upstream_client` is created, allowing a pre-connected HTTP client to be externally provided. The client must be API compatible with [lua-resty-http](https://github.com/pintsized/lua-resty-http). For example, using [lua-resty-upstream](https://github.com/hamishforbes/lua-resty-upstream) for load balancing.\n\n[Back to TOC](#events)\n\n\n#### before_upstream_request\n\nsyntax: `bind(\"before_upstream_request\", function(req_params) -- end)`\n\nparams: `req_params`. 
The table of request params about to be sent to the [request](https://github.com/pintsized/lua-resty-http#request) method.\n\nFires when about to perform an upstream request.\n\n[Back to TOC](#events)\n\n\n#### before_esi_include_request\n\nsyntax: `bind(\"before_esi_include_request\", function(req_params) -- end)`\n\nparams: `req_params`. The table of request params about to be used for an ESI include, via the [request](https://github.com/pintsized/lua-resty-http#request) method.\n\nFires when about to perform an HTTP request on behalf of an ESI include instruction.\n\n[Back to TOC](#events)\n\n\n#### after_upstream_request\n\nsyntax: `bind(\"after_upstream_request\", function(res) -- end)`\n\nparams: `res` The response table.\n\nFires when the status / headers have been fetched, but before the body is stored. Typically used to override cache headers before we decide what to do with this response.\n\nThe `res` table given contains:\n\n* `res.header` the table of case-insensitive HTTP response headers\n* `res.status` the HTTP response status code\n\n*Note; there are other fields and methods attached, but it is strongly advised to never adjust anything other than the above*\n\n*Note: unlike `before_save` below, this fires for all fetched content, not just cacheable content.*\n\n[Back to TOC](#events)\n\n\n#### before_save\n\nsyntax: `bind(\"before_save\", function(res) -- end)`\n\nparams: `res` The response table.\n\nFires when we're about to save the response.\n\nThe `res` table given contains:\n\n* `res.header` the table of case-insensitive HTTP response headers\n* `res.status` the HTTP response status code\n\n*Note; there are other fields and methods attached, but it is strongly advised to never adjust anything other than the above*\n\n[Back to TOC](#events)\n\n\n#### before_serve\n\nsyntax: `bind(\"before_serve\", function(res) -- end)`\n\nparams: `res` The `ledge.response` object.\n\nFires when we're about to serve. 
Often used to modify downstream headers.\n\nThe `res` table given contains:\n\n* `res.header` the table of case-insensitive HTTP response headers\n* `res.status` the HTTP response status code\n\n*Note; there are other fields and methods attached, but it is strongly advised to never adjust anything other than the above*\n\n[Back to TOC](#events)\n\n\n#### before_save_revalidation_data\n\nsyntax: `bind(\"before_save_revalidation_data\", function(reval_params, reval_headers) -- end)`\n\nparams: `reval_params`. Table of revalidation params.\n\nparams: `reval_headers`. Table of revalidation HTTP headers.\n\nFires when a background revalidation is triggered or when cache is being saved. Allows for modifying the headers and parameters (such as connection parameters) which are inherited by the background revalidation.\n\nThe `reval_params` are values derived from the current running configuration for:\n\n* server_addr\n* server_port\n* scheme\n* uri\n* connect_timeout\n* read_timeout\n* ssl_server_name\n* ssl_verify\n\n[Back to TOC](#events)\n\n\n#### before_vary_selection\n\nsyntax: `bind(\"before_vary_selection\", function(vary_key) -- end)`\n\nparams: `vary_key` A table of selecting headers\n\nFires when we're about to generate the vary key, used to select the correct cache representation.\n\nThe `vary_key` table is a hash of header field names (lowercase) to values.\nA field name which exists in the Vary response header but does not exist in the current request header will have a value of `ngx.null`.\n\n```\nRequest Headers:\n    Accept-Encoding: gzip\n    X-Test: abc\n    X-test: def\n\nResponse Headers:\n    Vary: Accept-Encoding, X-Test\n    Vary: X-Foo\n\nvary_key table:\n{\n    [\"accept-encoding\"] = \"gzip\",\n    [\"x-test\"] = \"abc,def\",\n    [\"x-foo\"] = ngx.null\n}\n```\n\n[Back to TOC](#events)\n\n\n## Administration\n\n### X-Cache\n\nLedge adds the non-standard `X-Cache` header, familiar to users of other caches. 
It simply indicates `HIT` or `MISS` and the host name in question, preserving upstream values when more than one cache server is in play.\n\nIf a resource is considered not cacheable, the `X-Cache` header will not be present in the response.\n\nFor example:\n\n* `X-Cache: HIT from ledge.tld` *A cache hit, with no (known) cache layer upstream.*\n* `X-Cache: HIT from ledge.tld, HIT from proxy.upstream.tld` *A cache hit, also hit upstream.*\n* `X-Cache: MISS from ledge.tld, HIT from proxy.upstream.tld` *A cache miss, but hit upstream.*\n* `X-Cache: MISS from ledge.tld, MISS from proxy.upstream.tld` *Regenerated at the origin.*\n\n[Back to TOC](#table-of-contents)\n\n\n### Logging\n\nIt's often useful to add some extra headers to your Nginx logs, for example:\n\n```\nlog_format ledge  '$remote_addr - $remote_user [$time_local] '\n                  '\"$request\" $status $body_bytes_sent '\n                  '\"$http_referer\" \"$http_user_agent\" '\n                  '\"Cache:$sent_http_x_cache\"  \"Age:$sent_http_age\" \"Via:$sent_http_via\"'\n                  ;\n\naccess_log /var/log/nginx/access_log ledge;\n```\n\nThis will give log lines such as:\n\n```\n192.168.59.3 - - [23/May/2016:22:22:18 +0000] \"GET /x/y/z HTTP/1.1\" 200 57840 \"-\" \"curl/7.37.1\" \"Cache:HIT from 159e8241f519:8080\"  \"Age:724\"\n```\n\n[Back to TOC](#table-of-contents)\n\n\n### Managing Qless\n\nLedge uses [lua-resty-qless](https://github.com/pintsized/lua-resty-qless) to schedule and process background tasks, which are stored in Redis.\n\nJobs are scheduled for background revalidation requests as well as wildcard PURGE requests, but most importantly for garbage collection of replaced body entities.\n\nThat is, it's very important that jobs are being run properly and in a timely fashion.\n\nInstalling the [web user interface](https://github.com/hamishforbes/lua-resty-qless-web) can be very helpful to check this.\n\nYou may also wish to tweak the [qless job 
history](https://github.com/pintsized/lua-resty-qless#configuration-options) settings if it takes up too much space.\n\n\n[Back to TOC](#table-of-contents)\n\n\n## Author\n\nJames Hurst \u003cjames@pintsized.co.uk\u003e\n\n\n## Licence\n\nThis module is licensed under the 2-clause BSD license.\n\nCopyright (c) James Hurst \u003cjames@pintsized.co.uk\u003e\n\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:\n\n* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.\n\n* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n","funding_links":["https://github.com/sponsors/pintsized"],"categories":["Lua"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fledgetech%2Fledge","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fledgetech%2Fledge","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fledgetech%2Fledge/lists"}