{"id":13583493,"url":"https://github.com/website-scraper/node-website-scraper","last_synced_at":"2025-05-13T18:14:13.909Z","repository":{"id":20395060,"uuid":"23670920","full_name":"website-scraper/node-website-scraper","owner":"website-scraper","description":"Download website to local directory (including all css, images, js, etc.)","archived":false,"fork":false,"pushed_at":"2025-03-31T16:23:16.000Z","size":800,"stargazers_count":1607,"open_issues_count":4,"forks_count":284,"subscribers_count":45,"default_branch":"master","last_synced_at":"2025-05-04T04:03:13.269Z","etag":null,"topics":["hacktoberfest","javascript","nodejs","scraper","website-scraper"],"latest_commit_sha":null,"homepage":"https://www.npmjs.org/package/website-scraper","language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/website-scraper.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null},"funding":{"github":"s0ph1e","patreon":"s0ph1e","issuehunt":"website-scraper"}},"created_at":"2014-09-04T16:48:20.000Z","updated_at":"2025-04-28T03:30:11.000Z","dependencies_parsed_at":"2024-09-30T10:43:02.441Z","dependency_job_id":"5148b9cd-25b8-4ae5-bfc4-0fd861e306d8","html_url":"https://github.com/website-scraper/node-website-scraper","commit_stats":{"total_commits":490,"total_committers":25,"mean_commits":19.6,"dds":0.5938775510204082,"last_synced_commit":"bdd0d514bf1341899353ca4d72043eccf3258f94"},"previous_names":["s0ph1e/node-website-scraper"],"tags_count":59,"template":false,"template_full_name":null,"repository_url":"htt
ps://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/website-scraper%2Fnode-website-scraper","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/website-scraper%2Fnode-website-scraper/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/website-scraper%2Fnode-website-scraper/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/website-scraper%2Fnode-website-scraper/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/website-scraper","download_url":"https://codeload.github.com/website-scraper/node-website-scraper/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254000885,"owners_count":21997443,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["hacktoberfest","javascript","nodejs","scraper","website-scraper"],"created_at":"2024-08-01T15:03:31.184Z","updated_at":"2025-05-13T18:14:13.867Z","avatar_url":"https://github.com/website-scraper.png","language":"JavaScript","readme":"[![Version](https://img.shields.io/npm/v/website-scraper.svg?style=flat)](https://www.npmjs.org/package/website-scraper)\n[![Downloads](https://img.shields.io/npm/dm/website-scraper.svg?style=flat)](https://www.npmjs.org/package/website-scraper)\n[![Node.js CI](https://github.com/website-scraper/node-website-scraper/actions/workflows/node.js.yml/badge.svg)](https://github.com/website-scraper/node-website-scraper)\n[![Test 
Coverage](https://codeclimate.com/github/website-scraper/node-website-scraper/badges/coverage.svg)](https://codeclimate.com/github/website-scraper/node-website-scraper/coverage)\n[![Coverage Status](https://coveralls.io/repos/github/website-scraper/node-website-scraper/badge.svg?branch=master)](https://coveralls.io/github/website-scraper/node-website-scraper?branch=master)\n\n\n# website-scraper\n\n[Options](#usage) | [Plugins](#plugins) | [Log and debug](#log-and-debug) | [Frequently Asked Questions](https://github.com/website-scraper/node-website-scraper/blob/master/docs/FAQ.md) | [Contributing](https://github.com/website-scraper/node-website-scraper/blob/master/CONTRIBUTING.md) | [Code of Conduct](https://github.com/website-scraper/node-website-scraper/blob/master/CODE_OF_CONDUCT.md)\n\nDownload a website to a local directory (including all css, images, js, etc.)\n\n**Note:** by default, dynamic websites (where content is loaded by js) may not be saved correctly because `website-scraper` doesn't execute js; it only parses http responses for html and css files. If you need to download a dynamic website, take a look at [website-scraper-puppeteer](https://github.com/website-scraper/website-scraper-puppeteer).\n\nThis module is Open Source Software maintained by one developer in their free time. 
If you want to thank the author of this module you can use [GitHub Sponsors](https://github.com/sponsors/s0ph1e) or [Patreon](https://www.patreon.com/s0ph1e).\n\n## Requirements\n* nodejs version \u003e= 18.17\n* website-scraper since v5 is pure ESM (it doesn't work with CommonJS), [read more in release v5.0.0 docs](https://github.com/website-scraper/node-website-scraper/releases/tag/v5.0.0)\n\n## Installation\n```\nnpm install website-scraper\n```\n\n## Usage\n```javascript\nimport scrape from 'website-scraper'; // only as ESM, no CommonJS\nconst options = {\n  urls: ['http://nodejs.org/'],\n  directory: '/path/to/save/'\n};\n\n// with async/await\nconst result = await scrape(options);\n\n// with promise\nscrape(options).then((result) =\u003e {});\n```\n\n## options\n* [urls](#urls) - urls to download, *required*\n* [directory](#directory) - path to save files, *required*\n* [sources](#sources) - selects which resources should be downloaded\n* [recursive](#recursive) - follow hyperlinks in html files\n* [maxRecursiveDepth](#maxrecursivedepth) - maximum depth for hyperlinks\n* [maxDepth](#maxdepth) - maximum depth for all dependencies\n* [request](#request) - custom options for http module [got](https://github.com/sindresorhus/got#options)\n* [subdirectories](#subdirectories) - subdirectories for file extensions\n* [defaultFilename](#defaultfilename) - filename for index page\n* [prettifyUrls](#prettifyurls) - prettify urls\n* [ignoreErrors](#ignoreerrors) - whether to ignore errors on resource downloading\n* [urlFilter](#urlfilter) - skip some urls\n* [filenameGenerator](#filenamegenerator) - generate filename for downloaded resource\n* [requestConcurrency](#requestconcurrency) - set maximum concurrent requests\n* [plugins](#plugins) - plugins, allow to customize filenames, request options, response handling, saving to storage, etc.\n\nDefault options you can find in 
[lib/config/defaults.js](https://github.com/website-scraper/node-website-scraper/blob/master/lib/config/defaults.js) or get them using \n```javascript\nimport defaultOptions from 'website-scraper/defaultOptions';\n```\n\n#### urls\nArray of objects which contain urls to download and filenames for them. **_Required_**.\n```javascript\nscrape({\n  urls: [\n    'http://nodejs.org/',\t// Will be saved with default filename 'index.html'\n    {url: 'http://nodejs.org/about', filename: 'about.html'},\n    {url: 'http://blog.nodejs.org/', filename: 'blog.html'}\n  ],\n  directory: '/path/to/save'\n});\n```\n\n#### directory\nString, absolute path to directory where downloaded files will be saved. Directory should not exist. It will be created by scraper. **_Required_**.\nHow to download website to existing directory and why it's not supported by default - check [here](https://github.com/website-scraper/node-website-scraper/blob/master/docs/FAQ.md#q-im-getting-directory-exists-error-can-i-save-website-to-existing-directory).\n\n#### sources\nArray of objects to download, specifies selectors and attribute values to select files for downloading. By default scraper tries to download all possible resources. Scraper uses cheerio to select html elements so `selector` can be any [selector that cheerio supports](https://github.com/cheeriojs/cheerio#selectors).\n```javascript\n// Downloading images, css files and scripts\nscrape({\n  urls: ['http://nodejs.org/'],\n  directory: '/path/to/save',\n  sources: [\n    {selector: 'img', attr: 'src'},\n    {selector: 'link[rel=\"stylesheet\"]', attr: 'href'},\n    {selector: 'script', attr: 'src'}\n  ]\n});\n```\n\n#### recursive\nBoolean, if `true` scraper will follow hyperlinks in html files. Don't forget to set `maxRecursiveDepth` to avoid infinite downloading. Defaults to `false`.\n\n#### maxRecursiveDepth\nPositive number, maximum allowed depth for hyperlinks. Other dependencies will be saved regardless of their depth. 
Defaults to `null` - no maximum recursive depth set.\n\n#### maxDepth\nPositive number, maximum allowed depth for all dependencies. Defaults to `null` - no maximum depth set.\nIn most cases you need [maxRecursiveDepth](#maxRecursiveDepth) instead of this option.\n\nThe difference between [maxRecursiveDepth](#maxRecursiveDepth) and [maxDepth](#maxDepth) is that\n* maxDepth is for all types of resources, so if you have\n  \u003e maxDepth=1 AND html (depth 0) ⟶ html (depth 1) ⟶ img (depth 2)\n\n  the last image will be filtered out by depth\n\n* maxRecursiveDepth is only for html resources, so if you have\n  \u003e maxRecursiveDepth=1 AND html (depth 0) ⟶ html (depth 1) ⟶ img (depth 2)\n\n  only html resources with depth 2 will be filtered out, the last image will be downloaded\n\n#### request\nObject, custom options for the http module [got](https://github.com/sindresorhus/got#options) which is used inside website-scraper. Allows setting retries, cookies, userAgent, encoding, etc.\n```javascript\n// use same request options for all resources\nscrape({\n  urls: ['http://example.com/'],\n  directory: '/path/to/save',\n  request: {\n    headers: {\n      'User-Agent': 'Mozilla/5.0 (Linux; Android 4.2.1; en-us; Nexus 4 Build/JOP40D) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.166 Mobile Safari/535.19'\n    }\n  }\n});\n```\n\n#### subdirectories\nArray of objects, specifies subdirectories for file extensions. 
If `null` all files will be saved to `directory`.\n```javascript\n/* Separate files into directories:\n  - `img` for .jpg, .png, .svg (full path `/path/to/save/img`)\n  - `js` for .js (full path `/path/to/save/js`)\n  - `css` for .css (full path `/path/to/save/css`)\n*/\nscrape({\n  urls: ['http://example.com'],\n  directory: '/path/to/save',\n  subdirectories: [\n    {directory: 'img', extensions: ['.jpg', '.png', '.svg']},\n    {directory: 'js', extensions: ['.js']},\n    {directory: 'css', extensions: ['.css']}\n  ]\n});\n```\n\n#### defaultFilename\nString, filename for index page. Defaults to `index.html`.\n\n#### prettifyUrls\nBoolean, whether urls should be 'prettified', by having the `defaultFilename` removed. Defaults to `false`.\n\n#### ignoreErrors\nBoolean, if `true` scraper will continue downloading resources after error occurred, if `false` - scraper will finish process and return error. Defaults to `false`.\n\n#### urlFilter\nFunction which is called for each url to check whether it should be scraped. Defaults to `null` - no url filter will be applied.\n```javascript\n// Links to other websites are filtered out by the urlFilter\nscrape({\n  urls: ['http://example.com/'],\n  urlFilter: function(url) {\n    return url.indexOf('http://example.com') === 0;\n  },\n  directory: '/path/to/save'\n});\n```\n\n#### filenameGenerator\nString (name of the bundled filenameGenerator). 
Filename generator determines path in file system where the resource will be saved.\n\n###### byType (default)\nWhen the `byType` filenameGenerator is used the downloaded files are saved by extension (as defined by the `subdirectories` setting) or directly in the `directory` folder, if no subdirectory is specified for the specific extension.\n\n###### bySiteStructure\nWhen the `bySiteStructure` filenameGenerator is used the downloaded files are saved in `directory` using same structure as on the website:\n- `/` =\u003e `DIRECTORY/example.com/index.html`\n- `/about` =\u003e `DIRECTORY/example.com/about/index.html`\n- `//cdn.example.com/resources/jquery.min.js` =\u003e `DIRECTORY/cdn.example.com/resources/jquery.min.js`\n\n```javascript\nscrape({\n  urls: ['http://example.com/'],\n  urlFilter: (url) =\u003e url.startsWith('http://example.com'), // Filter links to other websites\n  recursive: true,\n  maxRecursiveDepth: 10,\n  filenameGenerator: 'bySiteStructure',\n  directory: '/path/to/save'\n});\n```\n\n#### requestConcurrency\nNumber, maximum amount of concurrent requests. Defaults to `Infinity`.\n\n\n#### plugins\n\nPlugins allow to extend scraper behaviour\n\n* [Built-in plugins](#built-in-plugins)\n* [Existing plugins](#existing-plugins)\n* [Create plugin](#create-plugin)\n* Create action\n    * [beforeStart](#beforestart)\n    * [afterFinish](#afterfinish)\n    * [error](#error)\n    * [beforeRequest](#beforerequest)\n    * [afterResponse](#afterresponse)\n    * [onResourceSaved](#onresourcesaved)\n    * [onResourceError](#onresourceerror)\n    * [saveResource](#saveresource)\n    * [generateFilename](#generatefilename)\n    * [getReference](#getreference)\n\n##### Built-in plugins\nScraper has built-in plugins which are used by default if not overwritten with custom plugins. You can find them in [lib/plugins](lib/plugins) directory. 
These plugins are intended for internal use but can be copied if the behaviour of the plugins needs to be extended / changed.\n\n##### Existing plugins\n* [website-scraper-puppeteer](https://github.com/website-scraper/website-scraper-puppeteer) - download dynamic (rendered with js) websites using puppeteer\n* [website-scraper-existing-directory](https://github.com/website-scraper/website-scraper-existing-directory) - save files to existing directory\n\n##### Create plugin\n\nA plugin is an object with an `.apply` method which can be used to change scraper behavior.\n\nThe `.apply` method takes one argument - a `registerAction` function which allows you to add handlers for different actions. Action handlers are functions that are called by the scraper at different stages of downloading a website. For example `generateFilename` is called to generate a filename for a resource based on its url, `onResourceError` is called when an error occurs during requesting/handling/saving a resource.\n\nYou can add multiple plugins which register multiple actions. Plugins will be applied in the order they were added to options.\nAll actions should be regular or async functions. 
Scraper will call actions of specific type in order they were added and use result (if supported by action type) from last action call.\n\nList of supported actions with detailed descriptions and examples you can find below.\n```javascript\nclass MyPlugin {\n\tapply(registerAction) {\n\t\tregisterAction('beforeStart', async ({options}) =\u003e {});\n\t\tregisterAction('afterFinish', async () =\u003e {});\n\t\tregisterAction('error', async ({error}) =\u003e {console.error(error)});\n\t\tregisterAction('beforeRequest', async ({resource, requestOptions}) =\u003e ({requestOptions}));\n\t\tregisterAction('afterResponse', async ({response}) =\u003e response.body);\n\t\tregisterAction('onResourceSaved', ({resource}) =\u003e {});\n\t\tregisterAction('onResourceError', ({resource, error}) =\u003e {});\n\t\tregisterAction('saveResource', async ({resource}) =\u003e {});\n\t\tregisterAction('generateFilename', async ({resource}) =\u003e {})\n\t\tregisterAction('getReference', async ({resource, parentResource, originalReference}) =\u003e {})\n\t}\n}\n\nscrape({\n  urls: ['http://example.com/'],\n  directory: '/path/to/save',\n  plugins: [ new MyPlugin() ]\n});\n```\n\n##### beforeStart\nAction `beforeStart` is called before downloading is started. It can be used to initialize something needed for other actions.\n\nParameters - object which includes:\n* options - scraper normalized options object passed to scrape function\n* utils - scraper [utils](https://github.com/website-scraper/node-website-scraper/blob/master/lib/utils/index.js)\n\n```javascript\nregisterAction('beforeStart', async ({options, utils}) =\u003e {});\n```\n\n##### afterFinish\nAction afterFinish is called after all resources downloaded or error occurred. 
A good place to shut down/close something initialized and used in other actions.\n\nNo parameters.\n```javascript\nregisterAction('afterFinish', async () =\u003e {});\n```\n\n##### error\nAction error is called when an error occurs.\n\nParameters - object which includes:\n* error - Error object\n```javascript\nregisterAction('error', async ({error}) =\u003e {console.log(error)});\n```\n\n##### beforeRequest\nAction beforeRequest is called before requesting a resource. You can use it to customize request options per resource, for example if you want to use different encodings for different resource types or add something to the querystring.\n\nParameters - object which includes:\n* resource - [Resource](https://github.com/website-scraper/node-website-scraper/blob/master/lib/resource.js) object\n* requestOptions - default options for the http module [got](https://github.com/sindresorhus/got#options) or options returned by a previous beforeRequest action call\n\nShould return an object which includes custom options for the [got](https://github.com/sindresorhus/got#options) module.\nIf multiple `beforeRequest` actions are added - the scraper will use `requestOptions` from the last one.\n```javascript\n// Add ?myParam=123 to querystring for resource with url 'http://example.com'\nregisterAction('beforeRequest', async ({resource, requestOptions}) =\u003e {\n\tif (resource.getUrl() === 'http://example.com') {\n\t\treturn {requestOptions: {...requestOptions, searchParams: {myParam: 123}}};\n\t}\n\treturn {requestOptions};\n});\n\n// Add random delays between requests\nregisterAction('beforeRequest', async ({resource, requestOptions}) =\u003e {\n\tconst time = Math.round(Math.random() * 10000);\n\tawait new Promise((resolve) =\u003e setTimeout(resolve, time));\n\treturn {requestOptions};\n});\n```\n\n##### afterResponse\nAction afterResponse is called after each response; it allows you to customize a resource or reject saving it.\n\nParameters - object which includes:\n* response - response object from the http 
module [got](https://github.com/sindresorhus/got#response)\n\nReturn resolved `Promise` with:\n  * an object if the resource should be saved, the object should contain the following properties:\n    * `body` (string, required)\n    * `encoding` (`binary` or `utf8`) is used to save the file, binary is used by default.\n    * `metadata` (object) - everything you want to save for this resource (like headers, original text, timestamps, etc.), scraper will not use this field at all, it is only for the result\n  * or null if the resource should be skipped\n\nIf multiple actions `afterResponse` are added - the scraper will use the result from the last one.\n```javascript\n// Do not save resources that responded with 404 not found status code\nregisterAction('afterResponse', ({response}) =\u003e {\n\tif (response.statusCode === 404) {\n\t\treturn null;\n\t} else {\n\t\treturn {\n\t\t\tbody: response.body,\n\t\t\tencoding: 'utf8',\n\t\t\tmetadata: {\n\t\t\t\theaders: response.headers,\n\t\t\t\tsomeOtherData: [ 1, 2, 3 ]\n\t\t\t}\n\t\t}\n\t}\n});\n```\n\n##### onResourceSaved\nAction onResourceSaved is called each time after a resource is saved (to the file system or other storage with the 'saveResource' action).\n\nParameters - object which includes:\n* resource - [Resource](https://github.com/website-scraper/node-website-scraper/blob/master/lib/resource.js) object\n\nScraper ignores the result returned from this action and does not wait until it is resolved.\n```javascript\nregisterAction('onResourceSaved', ({resource}) =\u003e console.log(`Resource ${resource.url} saved!`));\n```\n\n##### onResourceError\nAction onResourceError is called each time a resource's downloading/handling/saving fails.\n\nParameters - object which includes:\n* resource - [Resource](https://github.com/website-scraper/node-website-scraper/blob/master/lib/resource.js) object\n* error - Error object\n\nScraper ignores the result returned from this action and does not wait until it is 
resolved.\n```javascript\nregisterAction('onResourceError', ({resource, error}) =\u003e console.log(`Resource ${resource.url} has error ${error}`));\n```\n\n##### generateFilename\nAction generateFilename is called to determine the path in the file system where the resource will be saved.\n\nParameters - object which includes:\n* resource - [Resource](https://github.com/website-scraper/node-website-scraper/blob/master/lib/resource.js) object\n* responseData - object returned from the afterResponse action, contains `url`, `mimeType`, `body`, `metadata` properties\n\nShould return an object which includes:\n* filename - String, path relative to `directory` for the specified resource\n\nIf multiple `generateFilename` actions are added - the scraper will use the result from the last one.\n\nDefault plugins which generate filenames: [byType](https://github.com/website-scraper/node-website-scraper/blob/master/lib/plugins/generate-filenamy-by-type-plugin.js), [bySiteStructure](https://github.com/website-scraper/node-website-scraper/blob/master/lib/plugins/generate-filenamy-by-site-structure-plugin.js)\n```javascript\n// Generate a random filename (website-scraper v5+ is pure ESM, so use import instead of require)\nimport crypto from 'crypto';\nregisterAction('generateFilename', ({resource}) =\u003e {\n  return {filename: crypto.randomBytes(20).toString('hex')};\n});\n```\n\n##### getReference\nAction getReference is called to retrieve the reference to a resource for its parent resource. It can be used to customize the reference to a resource, for example, to replace a missing resource (which was not loaded) with an absolute url. 
By default reference is relative path from `parentResource` to `resource` (see [GetRelativePathReferencePlugin](https://github.com/website-scraper/node-website-scraper/blob/master/lib/plugins/get-relative-path-reference-plugin.js)).\n\nParameters - object which includes:\n* resource - [Resource](https://github.com/website-scraper/node-website-scraper/blob/master/lib/resource.js) object\n* parentResource - [Resource](https://github.com/website-scraper/node-website-scraper/blob/master/lib/resource.js) object\n* originalReference - string, original reference to `resource` in `parentResource`\n\nShould return object which includes:\n* reference - string or null, reference to `resource` for `parentResource`. If you don't want to update reference - return null\n\nIf multiple actions `getReference` added - scraper will use result from last one.\n```javascript\n// Use relative filenames for saved resources and absolute urls for missing\nregisterAction('getReference', ({resource, parentResource, originalReference}) =\u003e {\n  if (!resource) {\n    return {reference: parentResource.url + originalReference}\n  }\n  return {reference: utils.getRelativePath(parentResource.filename, resource.filename)};\n});\n```\n\n\n##### saveResource\nAction saveResource is called to save file to some storage. Use it to save files where you need: to dropbox, amazon S3, existing directory, etc. 
By default all files are saved in the local file system to the new directory passed in the `directory` option (see [SaveResourceToFileSystemPlugin](https://github.com/website-scraper/node-website-scraper/blob/master/lib/plugins/save-resource-to-fs-plugin.js)).\n\nParameters - object which includes:\n* resource - [Resource](https://github.com/website-scraper/node-website-scraper/blob/master/lib/resource.js) object\n\nIf multiple `saveResource` actions are added - the resource will be saved to multiple storages.\n```javascript\nregisterAction('saveResource', async ({resource}) =\u003e {\n  const filename = resource.getFilename();\n  const text = resource.getText();\n  await saveItSomewhere(filename, text);\n});\n```\n\n## result\nArray of [Resource](https://github.com/website-scraper/node-website-scraper/blob/master/lib/resource.js) objects containing:\n- `url`: url of loaded page\n- `filename`: filename where page was saved (relative to `directory`)\n- `children`: array of children Resources\n\n## Log and debug\nThis module uses [debug](https://github.com/visionmedia/debug) to log events. To enable logs you should use the environment variable `DEBUG`.\nThe following command will log everything from website-scraper:\n```bash\nexport DEBUG=website-scraper*; node app.js\n```\n\nThe module has different loggers for levels: `website-scraper:error`, `website-scraper:warn`, `website-scraper:info`, `website-scraper:debug`, `website-scraper:log`. 
Please read the [debug](https://github.com/visionmedia/debug) documentation to learn how to include/exclude specific loggers.\n","funding_links":["https://github.com/sponsors/s0ph1e","https://patreon.com/s0ph1e","https://issuehunt.io/r/website-scraper","https://www.patreon.com/s0ph1e"],"categories":["JavaScript","Crawler","HarmonyOS"],"sub_categories":["Windows Manager"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwebsite-scraper%2Fnode-website-scraper","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fwebsite-scraper%2Fnode-website-scraper","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwebsite-scraper%2Fnode-website-scraper/lists"}