Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/acusti/wbdl
crawl and archive a website and all its pages as plain HTML
- Host: GitHub
- URL: https://github.com/acusti/wbdl
- Owner: acusti
- License: unlicense
- Created: 2023-11-29T21:14:21.000Z (12 months ago)
- Default Branch: main
- Last Pushed: 2023-12-21T02:11:57.000Z (11 months ago)
- Last Synced: 2024-09-25T16:59:03.369Z (about 2 months ago)
- Language: JavaScript
- Size: 21.5 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# wbdl
crawl and archive a website and all its pages as plain HTML files organized
in directories to match the URL structure of the website.

the simplest intended usage is as a shell command:
```sh
npx wbdl https://websitetoarchive.net
```
this will print out the path to the folder on your filesystem containing the generated website archive. you can also have the utility open the resulting folder via the `--open` CLI flag:
```sh
npx wbdl --open https://another.website.org
```

the utility can also be installed as a node.js package and invoked programmatically. to install via npm:
```sh
npm install --save wbdl
```
or yarn:
```sh
yarn add wbdl
```

then to import and use it:
```js
import { wbdl } from 'wbdl';

const pathToArchive = await wbdl(urlString);
```

here’s the jsdoc signature for the `wbdl` function (using typescript’s jsdoc type annotations):
```js
/**
* Takes a URL, downloads its HTML, and crawls it for any links to other pages
* on the same domain. It then repeats the process for any link it finds. Each
* page is stored as an index.html file in a directory structure based on the
* hierarchy of their URLs. Lastly, it compresses the website into a ZIP file
* and returns the path to the generated archive.
*
* @param {string} url The initial URL to crawl
 * @returns {Promise<string>} Path to the generated ZIP file
*/
```
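in practice, calling it from an ES module script might look like this (a minimal sketch; the argument handling and error handling are illustrative, not part of the package):

```js
// archive.mjs: illustrative usage only
import { wbdl } from 'wbdl';

// take the target URL from the command line (hypothetical convention)
const url = process.argv[2] ?? 'https://websitetoarchive.net';

try {
  // resolves with the path to the generated archive
  const pathToArchive = await wbdl(url);
  console.log(`archive written to ${pathToArchive}`);
} catch (error) {
  console.error('archiving failed:', error);
  process.exitCode = 1;
}
```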
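the package's internals aren't documented here, but the behavior the docblock describes can be sketched roughly as a breadth-first crawl with a visited set. everything below is a self-contained illustration, not the actual implementation (the naive regex link extraction in particular stands in for whatever wbdl really uses):

```js
import { mkdir, writeFile } from 'node:fs/promises';
import path from 'node:path';

// fetch a page body; node 18+ provides fetch globally
async function fetchHtml(url) {
  const response = await fetch(url);
  return response.text();
}

// naive href extraction; a real crawler would use an HTML parser
function extractSameDomainLinks(html, baseUrl) {
  const links = [];
  for (const match of html.matchAll(/href="([^"]+)"/g)) {
    const resolved = new URL(match[1], baseUrl);
    if (resolved.origin !== baseUrl.origin) continue;
    resolved.hash = ''; // drop fragments so the same page isn't revisited
    links.push(resolved.href);
  }
  return links;
}

// crawl startUrl and store each page as index.html in a directory
// hierarchy mirroring the URL structure, e.g.
// https://site.net/about/team -> <outDir>/about/team/index.html
async function crawl(startUrl, outDir) {
  const start = new URL(startUrl);
  const queue = [start.href];
  const visited = new Set();

  while (queue.length > 0) {
    const href = queue.shift();
    if (visited.has(href)) continue;
    visited.add(href);

    const html = await fetchHtml(href);

    const dir = path.join(outDir, new URL(href).pathname);
    await mkdir(dir, { recursive: true });
    await writeFile(path.join(dir, 'index.html'), html);

    // enqueue every same-domain link found on the page
    for (const link of extractSameDomainLinks(html, start)) {
      if (!visited.has(link)) queue.push(link);
    }
  }
}
```

(zipping the resulting directory, the final step the docblock mentions, is omitted here.)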