{"id":19212424,"url":"https://github.com/leafo/web_sanitize","last_synced_at":"2025-10-06T21:58:52.183Z","repository":{"id":7271852,"uuid":"8585231","full_name":"leafo/web_sanitize","owner":"leafo","description":"Lua library for sanitizing, parsing, and editing untrusted HTML","archived":false,"fork":false,"pushed_at":"2023-05-19T19:51:24.000Z","size":410,"stargazers_count":67,"open_issues_count":1,"forks_count":10,"subscribers_count":5,"default_branch":"master","last_synced_at":"2025-08-02T13:45:28.750Z","etag":null,"topics":["css-sanitization","html","html-parser","html-sanitization","lua","moonscript","security"],"latest_commit_sha":null,"homepage":"","language":"MoonScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/leafo.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2013-03-05T17:37:46.000Z","updated_at":"2025-05-27T06:56:21.000Z","dependencies_parsed_at":"2024-11-09T13:47:05.854Z","dependency_job_id":"551ec00f-2ead-42b0-99f7-e85ac57b7404","html_url":"https://github.com/leafo/web_sanitize","commit_stats":null,"previous_names":[],"tags_count":13,"template":false,"template_full_name":null,"purl":"pkg:github/leafo/web_sanitize","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leafo%2Fweb_sanitize","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leafo%2Fweb_sanitize/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leafo%2Fweb_sanitize/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leafo%2Fweb_sanitize/ma
nifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/leafo","download_url":"https://codeload.github.com/leafo/web_sanitize/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/leafo%2Fweb_sanitize/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":278686633,"owners_count":26028326,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-10-06T02:00:05.630Z","response_time":65,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["css-sanitization","html","html-parser","html-sanitization","lua","moonscript","security"],"created_at":"2024-11-09T13:46:58.696Z","updated_at":"2025-10-06T21:58:52.152Z","avatar_url":"https://github.com/leafo.png","language":"MoonScript","readme":"# web\\_sanitize\n\n![test](https://github.com/leafo/web_sanitize/workflows/test/badge.svg)\n\nA Lua library for working with HTML and CSS. It can do HTML and CSS\nsanitization using a whitelist, along with general HTML parsing and\ntransformation. It also includes a query-selector syntax (similar to jQuery)\nfor scanning HTML.\n\n**Security**: This library is used to parse and verify a large amount of\nuntrusted user generated content on production commercial applications. It is\nactively monitored and updated for security issues. 
If you uncover any\nvulnerabilities, contact `leafot@gmail.com` with the subject `web_sanitize security\nvulnerability`. Do not publicly post security vulnerabilities on the issue\ntracker. When in doubt, send a private email.\n\n\n* [HTML Sanitizer](#html-sanitizer)\n* [HTML Parser/Scanner](#html-parser)\n\nExamples:\n\n```lua\nlocal web_sanitize = require \"web_sanitize\"\n\n-- Fix bad HTML\nprint(web_sanitize.sanitize_html(\n  [[\u003ch1 onload=\"alert('XSS')\"\u003e This HTML Stinks \u003cScRiPt\u003ealert('hacked!')]]))\n--  \u003ch1\u003e This HTML Stinks \u0026lt;ScRiPt\u0026gt;alert(\u0026#x27;hacked!\u0026#x27;)\u003c/h1\u003e\n\n-- Sanitize CSS properties\nprint(web_sanitize.sanitize_style([[border: 12px; behavior:url(script.htc);]]))\n--  border: 12px\n\n-- Extract text from HTML\nprint(web_sanitize.extract_text([[\u003cdiv class=\"cool\"\u003eHello \u003cb\u003eworld\u003c/b\u003e!\u003c/div\u003e]]))\n-- Hello world!\n\n```\n\n## Install\n\n```bash\n$ luarocks install web_sanitize\n```\n\n## HTML Sanitizer\n\n`web_sanitize` tries to preserve the structure of the input as faithfully as possible\nwhile sanitizing bad content. For HTML, tags that don't match a whitelist are\nescaped and written as plain text. Attributes of accepted tags that don't match\nthe whitelist are stripped from the output. You can instruct the sanitizer to\ninsert your own attributes into tags as well; for example, the default\nconfiguration inserts a `rel=\"nofollow\"` attribute into every `a` tag.\n\nThe sanitizer does not aim to be a complete HTML parser; instead, its goal\nis to accept a strict subset of HTML and reject everything else. 
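\n\nA quick sketch of those rules (assuming the default whitelist; the exact escaped output can vary between versions, so none is shown):\n\n```lua\nlocal web_sanitize = require \"web_sanitize\"\n\n-- onclick is not a whitelisted attribute, so it is stripped; the default\n-- whitelist also inserts rel=\"nofollow\" into the a tag\nprint(web_sanitize.sanitize_html(\n  [[\u003ca href=\"http://leafo.net\" onclick=\"alert(1)\"\u003ehome\u003c/a\u003e]]))\n\n-- blink is not a whitelisted tag, so it is escaped and kept as plain text\nprint(web_sanitize.sanitize_html(\"\u003cblink\u003ehi\u003c/blink\u003e\"))\n```\n\n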
If you want a\nmore complete HTML parser you can use the [HTML Parser/Scanner](#html-parser)\ndescribed below.\n\nAny unclosed tags that are approved will be closed at the end of the string.\nThis means it's safe to put sanitized HTML anywhere in an existing document\nwithout worrying about breaking the structure.\n\nIf an outer tag is prematurely closed before the inner tags, the inner\ntags will automatically be closed.\n\n* `\u003cli\u003e\u003cb\u003eHello World` → `\u003cli\u003e\u003cb\u003eHello World\u003c/b\u003e\u003c/li\u003e`\n* `\u003cli\u003e\u003cb\u003eHello World\u003c/li\u003e` → `\u003cli\u003e\u003cb\u003eHello World\u003c/b\u003e\u003c/li\u003e`\n\n## CSS Sanitizer\n\nA whitelist is used to define an approved set of CSS properties, along with a\ntype specification for what kinds of parameters they can take. If a CSS\nproperty is not in the whitelist, or does not match the type specification, it\nis stripped from the output. Valid CSS properties are preserved unchanged.\n\n## Function Reference\n\n```lua\nlocal web_sanitize = require(\"web_sanitize\")\n```\n\n### HTML\n\n#### `sanitize_html(unsafe_html)`\n\nSanitizes HTML using the whitelist located in `require \"web_sanitize.whitelist\"`.\n\n```lua\nlocal safe_html = web_sanitize.sanitize_html(\"hi\u003cscript\u003ealert('hi')\u003c/script\u003e\")\n```\n\n#### `extract_text(unsafe_html)`\n\nExtracts just the textual content of unsafe HTML, returning valid HTML. No HTML\ntags will be present in the output. There may be HTML escape sequences present\nif the text contains any characters that might be interpreted as part of an\nHTML tag (eg. a `\u003c`).\n\n```lua\nlocal text = web_sanitize.extract_text(\"\u003cdiv\u003ehello \u003cb\u003eworld\u003c/b\u003e\u003c/div\u003e\")\n```\n\n### CSS\n\n#### `sanitize_style(unsafe_style_attributes)`\n\nSanitizes a list of CSS attributes (not an entire CSS file). 
Suitable for use\non the `style` HTML attribute.\n\n```lua\nlocal safe_style = web_sanitize.sanitize_style(\"border: 12px; behavior:url(script.htc);\")\n```\n\n## Configuring The Whitelist\n\n### HTML\n\nThe default whitelist provides a basic set of authorized HTML tags. Feel free\nto submit a pull request if there is something missing.\n\nGet access to the whitelist like so:\n\n```lua\nlocal whitelist = require \"web_sanitize.whitelist\"\n```\n\nIt's recommended to make a clone of the whitelist before modifying it:\n\n```lua\nlocal my_whitelist = whitelist:clone()\n\n-- let iframes be used in sanitized HTML\nmy_whitelist.tags.iframe = {\n  width = true,\n  height = true,\n  frameborder = true,\n  src = true,\n}\n```\n\nTo use your modified whitelist, you'll need to instantiate a\n`Sanitizer` object directly:\n\n```lua\nlocal Sanitizer = require(\"web_sanitize.html\").Sanitizer\nlocal sanitize_html = Sanitizer({whitelist = my_whitelist})\n\nsanitize_html([[\u003ciframe src=\"http://leafo.net\" frameborder=\"0\"\u003e\u003c/iframe\u003e]])\n```\n\nSee [`whitelist.moon`][2] for the default whitelist.\n\nThe whitelist table has three important fields:\n\n* `tags`: a table of valid tag names and their corresponding valid attributes\n* `add_attributes`: a table of attributes that should be inserted into a tag\n* `self_closing`: a set of tags that don't need a closing tag\n\nThe `tags` field specifies which tags can be used, and the\nattributes allowed on them.\n\nAn attribute whitelist can be either a boolean or a function. If it's a\nfunction, it takes as arguments `value`, `attribute_name`, and `tag_name`.\nIf this function returns a string, then that value is used to replace the value\nof the attribute. 
If it returns any other value, it's coerced into a boolean\nand used to determine whether the attribute should be kept.\n\nFor example, you could include `sanitize_style` in the HTML whitelist to allow\na subset of CSS:\n\n```lua\nlocal web_sanitize = require \"web_sanitize\"\nlocal whitelist = require(\"web_sanitize.whitelist\"):clone()\n\n-- set the default style attribute handler\nwhitelist[1].style = function(value)\n  return web_sanitize.sanitize_style(value)\nend\n```\n\nThe `add_attributes` field can be used to inject additional attributes onto a tag.\nThe default whitelist contains a rule to make all links `nofollow`:\n\n```lua\nwhitelist.add_attributes = {\n  a = {\n    rel = \"nofollow\"\n  }\n}\n```\n\nFor example, you could change this to add `noopener` as well:\n\n```lua\nwhitelist.add_attributes.a = {\n  rel = \"nofollow noopener\"\n}\n```\n\n`add_attributes` entries can also be functions that dynamically insert attribute\nvalues based on the other attributes in the tag. The function will receive one\nargument, a table of the parsed attributes. These are the attributes as written\nin the original HTML; the table does not reflect any changes the sanitizer will make\nto the element. The function can return `nil` or `false` to make no changes, or\nreturn a string to add an attribute containing that value.\n\nHere's how you might add `nofollow noopener` to every link except those from a\ncertain domain:\n\n```lua\nwhitelist.add_attributes.a = {\n  rel = function(attr)\n    for _, tuple in ipairs(attr) do\n      if tuple[1]:lower() == \"href\" and not (tuple[2] or \"\"):match(\"^https?://leafo%.net/\") then\n        return \"nofollow noopener\"\n      end\n    end\n  end\n}\n```\n\nThe format of the attributes argument has all attributes stored as `{name,\nvalue}` tuples in the numeric indices, and the normalized (lowercase) attribute\nname and value stored in the hash table component. The hash table component is\nadded for convenience. 
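\n\nBecause the hash part only keeps the most recent value written, security-sensitive code can cross-check it against the positional tuples (a sketch; `attr_unshadowed` is a hypothetical helper, not part of the library):\n\n```lua\n-- true only when every copy of the named attribute, as originally\n-- written, agrees with the normalized value in the hash part\nlocal function attr_unshadowed(attr, name)\n  for _, tuple in ipairs(attr) do\n    -- tuple[2] is nil for valueless attributes, which hold `true` in the hash part\n    local written = tuple[2] == nil and true or tuple[2]\n    if tuple[1]:lower() == name and written ~= attr[name] then\n      return false -- a duplicate copy shadows the visible value\n    end\n  end\n  return true\nend\n```\n\n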
For security-critical checks, you should iterate over\nthe numeric tuples to make sure that no attributes are being shadowed.\n\nThis HTML will create the following object as the argument:\n\n    \u003ca href=\"http://leafo.net\" HREF=\"http://itch.io\" onclick=\"alert('hi')\"\u003e\u003c/a\u003e\n\n```lua\n{\n  {\"href\", \"http://leafo.net\"},\n  {\"HREF\", \"http://itch.io\"},\n  {\"onclick\", \"alert('hi')\"},\n  href = \"http://itch.io\",\n  onclick = \"alert('hi')\",\n}\n```\n\n### CSS\n\nSimilar to the above; see [`css_whitelist.moon`][6].\n\n## Customizing The Sanitizer\n\nIn addition to the `whitelist` option shown above, the sanitizer has the following options:\n\n* `strip_tags` - *boolean* Remove unknown tags from the output entirely, instead of escaping them as text, default: `false`\n* `strip_comments` - *boolean* Remove comments from the output instead of escaping them, default: `false`\n\n```lua\nlocal Sanitizer = require(\"web_sanitize.html\").Sanitizer\nlocal sanitize_html = Sanitizer({strip_tags = true})\n\nsanitize_html([[\u003cbody\u003eHello \u003cstrong\u003eworld\u003c/strong\u003e\u003c/body\u003e]]) --\u003e Hello \u003cstrong\u003eworld\u003c/strong\u003e\n```\n\n## HTML Parser\n\nThe HTML parser lets you extract data from, and manipulate, HTML using a minimal\nDocument Object Model and [query selector\nsyntax](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelector).\nIt attempts to follow the [HTML\nspec](https://html.spec.whatwg.org/multipage/syntax.html) as best it can.\n\nThe scanner provides a lower-level interface that lets you iterate through each\nnode in an HTML document using a callback. For each node parsed in the HTML\ndocument, the callback is called with an object representing the structure of the\ndocument at the current location. 
This node supports mutating the document when\nusing the `replace_html` function.\n\n```lua\nlocal scanner = require(\"web_sanitize.query.scan_html\")\n```\n\nHere are a few things to be aware of when using the scanner:\n\n* The scanner performs a *depth first* scan: the callback is issued on a node after the closing tag for that node has been parsed.\n* Any markup in *raw text* elements like `script`, `style`, `title` is ignored (unless it's the appropriate closing tag)\n* Any markup inside HTML comments or CDATA sections is ignored\n* Unclosed tags are considered dangling tags and will be processed after the parser reaches the end of the input (With the exception of void tags (eg. `img`, `hr`), which are always automatically closed regardless of whether self-closing (`\u003ca/\u003e`) syntax is used.)\n* Attribute values automatically have their HTML entities decoded (eg. \u0026amp;amp; becomes \u0026amp;)\n* All edits are performed after the scan has taken place, not during the scan. If you alter the content of a node's inner or outer HTML, the scanner will not see these changes in the current iteration. Additionally, making edits to a parent node's content will shadow any edits you've made to child nodes. You can work around these limitations by doing multi-pass replacements.\n* Text nodes (when enabled) will treat CDATA tags as separate text nodes. Get the content with the `inner_html` method. (`outer_html` will return the CDATA tag)\n\nThe scanner exposes two primitive object types: `NodeStack` and `HTMLNode`.\n\n`NodeStack` has the following methods and properties:\n\n* `stack[n]` - get the nth item in the stack (as an `HTMLNode`)\n* `stack:current()` - return the `HTMLNode` on top of the stack\n* `stack:is(query)` - return `true` if the stack matches the query selector\n\n`HTMLNode` has the following methods and properties:\n\n* `node.tag` - the name of the tag (eg. `\"div\"`, `\"span\"`). 
Will be `\"\"` for text nodes, and `\"cdata\"` for `CDATA` text nodes\n* `node.type` - set to `\"text_node\"` for text nodes, `nil` otherwise\n* `node.num` - integer giving this node's child position (NOTE: this number changes depending on whether text nodes are enabled)\n* `node.self_closing` - `true` if the tag uses self-closing syntax (`\u003ca /\u003e`), `nil` otherwise\n* `node.attr` - A table of attributes if the tag has attributes, `nil` otherwise. See the attribute table format below\n* `node:outer_html()` - get the entire tag as an HTML fragment string, including the opening and closing tags\n* `node:inner_html()` - get the tag's content as an HTML fragment string, excluding the opening and closing tags\n* `node:inner_text()` - get a string of the textual content inside the tag (effectively `extract_text(inner_html)`, using the `extract_text` function described above)\n* `node:replace_outer_html(html_text)` **(`replace_html` only)** - Replaces the entire tag with HTML fragment `html_text`\n* `node:replace_inner_html(html_text)` **(`replace_html` only)** - Replaces the inside of the tag with HTML fragment `html_text`\n* `node:replace_attributes(tbl)` **(`replace_html` only)** - Replaces all attributes on the tag with the table of attributes\n* `node:update_attributes(tbl)` **(`replace_html` only)** - Merges a table of attributes with the current attributes, overwriting any of the existing ones (including duplicates) with the ones provided\n\nThe node attributes are stored in a table with both array and hash table\nelements. 
The hash table elements have their keys normalized to lowercase and\nonly hold the most recent value.\n\n```lua\n-- \u003cdiv first=\"value\" first=\"\u0026quot;hey\u0026quot;\" Hello=world readonly\u003e\u003c/div\u003e\nnode.attr = {\n  { \"first\", \"value\"},\n  { \"first\", '\"hey\"'},\n  { \"Hello\", \"world\"},\n  { \"readonly\" },\n\n  first = '\"hey\"',\n  hello = \"world\",\n  readonly = true\n}\n```\n\nWhen updating or replacing attributes, the same table syntax is used as the\nargument, but it will write duplicates if you have a single attribute repeated\nin both the table and array format.\n\n#### `scan_html(html_text, callback, opts)`\n\nScans over all nodes in the `html_text`, calling the `callback` function for\neach node found. The callback receives one argument, an instance of a\n`NodeStack`. A node stack is a Lua table holding an array of all the nodes in\nthe stack, with the topmost node being the current one.\n\nEach node in the node stack is an instance of `HTMLNode`. In `scan_html` the\nnode is read-only, and can be used to get the properties and content of the\nnode (eg. 
`inner_html`, `inner_text`, `outer_html`).\n\nHere's how you might get the `href` and text of every `a` element in an HTML string:\n\n```lua\nlocal scanner = require(\"web_sanitize.query.scan_html\")\n\nlocal my_html = [[\n\u003cul\u003e\n  \u003cli\u003e\u003ca href=\"http://leafo.net\"\u003eMy homepage\u003c/a\u003e\n  \u003cli\u003e\u003ca href=\"http://github.com/leafo\"\u003eMy GitHub\u003c/a\u003e\n\u003c/ul\u003e\n\n\u003cp\u003eAlso, don't forget to check out \u003ca href=\"http://itch.io\"\u003eitch.io\u003c/a\u003e.\u003c/p\u003e\n]]\n\nlocal urls = {}\n\nscanner.scan_html(my_html, function(stack)\n  if stack:is(\"a\") then\n    local node = stack:current()\n\n    table.insert(urls, {\n      url = node.attr.href,\n      text = node:inner_text()\n    })\n  end\nend)\n```\n\nYou can optionally enable *text nodes* to have the parser emit a node for each\nchunk of text. This includes text that is nested within a tag. Set `text_nodes`\nto `true` in an options table passed as the last argument.\n\nYou can get the content of the node by calling either `inner_html` or\n`outer_html`.\n\n#### `replace_html(html_text, callback, opts)`\n\nWorks the same as `scan_html`, except each node in the stack is capable of\nbeing mutated using the `replace_attributes`, `update_attributes`,\n`replace_inner_html`, and `replace_outer_html` methods.\n\nHere's how you might convert all `a` tags that don't match a certain URL\npattern to plain text:\n\n```lua\nscanner.replace_html(my_html, function(stack)\n  if stack:is(\"a\") then\n    local node = stack:current()\n    local url = node.attr.href or \"\"\n\n    if not url:match(\"^https?://leafo%.net\") then\n      node:replace_outer_html(node:inner_html())\n    end\n  end\nend)\n```\n\nText nodes can also be manipulated by `replace_html`. You can enable text nodes\nby setting `text_nodes` to `true` in an options table passed as the last\nargument. 
The text node can be updated by either calling `replace_outer_html`\nor `replace_inner_html`.\n\nFor example, you might want to write a script that converts plain-text links into `a` tags,\nbut not when they're already inside an `a` tag:\n\n```lua\nlocal replace_html = require(\"web_sanitize.query.scan_html\").replace_html\n\nlocal my_html = [[\n  text that should be a link: http://leafo.net\n  and a link that should be unchanged: \u003ca href=\"https://itch.io\"\u003ehttps://itch.io\u003c/a\u003e\n]]\n\nlocal formatted_html = replace_html(my_html, function(stack)\n  local node = stack:current()\n  if node.tag == \"\" and not stack:is(\"a *, a\") then\n    node:replace_outer_html(node:outer_html():gsub(\"(https?://[^ \u003c\\\"']+)\", \"\u003ca href=\\\"%1\\\"\u003e%1\u003c/a\u003e\"))\n  end\nend, { text_nodes = true })\n\nprint(formatted_html)\n```\n\n## Fast?\n\nIt should be pretty fast. It's powered by the wonderful library [LPeg][3].\nThere is only one string concatenation on each call to `sanitize_html`. 200kb\nof HTML can be sanitized in 0.01 seconds on my computer. This makes it\nunnecessary in most circumstances to sanitize ahead of time when rendering\nuntrusted HTML.\n\n## Tests\n\nRequires [Busted][4] and [MoonScript][5].\n\n```bash\nmake test\n```\n\n## Changelog\n\n**Jan 25  2021** - 1.1.0\n\n* Update text extractor\n  * Add option for extracting as html or as plain text\n  * Add option for removing non-printable characters\n  * Add HTML entity translation when extracting as plain text\n  * Whitespace trimming and normalization is UTF-8 whitespace aware\n* Minor updates to CSS default whitelist for border attributes\n\n**Jan 15  2020** - 1.0.0\n\n* **Important** \u0026mdash; Added fix where specially crafted HTML could sanitize to HTML with an unclosed tag\n* Fixed whitespace preservation for text around self-closing tags\n* Updated CSS whitelist\n* Added cache to `parse_query` for huge speedups when doing repeat matches\n\n**Sep 08  2017** - 0.6.1\n\n* Add support for callback to `add_attributes` for dynamically injecting an attribute 
into a tag\n\n**May 09  2016** - 0.5.0\n\nSanitizer\n\n* Add `clone` method to whitelist\n* Add `Sanitizer` constructor, with `whitelist` and `strip_tags` options\n* Add `Extractor` constructor\n\nScanner\n\n* `replace_attributes` works correctly with boolean attributes, eg. `{allowfullscreen = true}`\n* `replace_attributes` works correctly with void tags\n* `replace_attributes` only manipulates text of opening tag, not entire tag, preventing any double edit bugs\n* attribute order is preserved when mutating attributes with `replace_attributes`\n* the `attr` object has array positional items with the names of the attributes in the order they were encountered\n\n**Dec 27  2015** - 0.4.0\n\n* Add query and scan implementations\n* Add html rewrite interface, attribute rewriter\n* Support Lua 5.2 and above (removed references to global `unpack`)\n\n*Note: all of these things are undocumented at the moment, sorry. Check the specs for examples*\n\n**Feb 1 2015** - 0.3.0\n\n* Add `sanitize_css`\n* Let attribute values be overwritten from whitelist\n* `extract_text` collapses extra whitespace\n\n**Oct 6 2014** - 0.2.0\n\n* Add `extract_text` function\n* Correctly parse protocol-relative URLs in `href`/`src` attributes\n* Correctly parse attributes that have no value\n\n**April 16 2014** - 0.0.1\n\n* Initial release\n\n# Contact\n\nAuthor: Leaf Corcoran (leafo) ([@moonscript](http://twitter.com/moonscript))\nLicense: MIT Copyright (c) 2020 Leaf Corcoran\nEmail: leafot@gmail.com\nHomepage: \u003chttp://leafo.net\u003e\n\n\n [1]: https://github.com/leafo/web_sanitize/blob/master/test.moon\n [2]: https://github.com/leafo/web_sanitize/blob/master/web_sanitize/whitelist.moon\n [3]: http://www.inf.puc-rio.br/~roberto/lpeg/\n [4]: http://olivinelabs.com/busted/\n [5]: http://moonscript.org\n [6]: 
https://github.com/leafo/web_sanitize/blob/master/web_sanitize/css_whitelist.moon\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fleafo%2Fweb_sanitize","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fleafo%2Fweb_sanitize","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fleafo%2Fweb_sanitize/lists"}