Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/reatlat/webchronicle
A web archiving tool that allows you to capture and explore snapshots of webpages over time—like the Wayback Machine, but as your own personal Time Machine.
- Host: GitHub
- URL: https://github.com/reatlat/webchronicle
- Owner: reatlat
- License: mit
- Created: 2024-11-01T11:35:22.000Z (3 months ago)
- Default Branch: main
- Last Pushed: 2025-01-02T06:32:18.000Z (about 1 month ago)
- Last Synced: 2025-01-28T17:06:48.140Z (13 days ago)
- Topics: 11ty, eleventy, eleventy-website, timemachine
- Language: JavaScript
- Homepage: https://webchronicle.dev
- Size: 213 MB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
README
# webChronicle
A web archiving tool that allows you to capture and explore snapshots of webpages over time—like the Wayback Machine, but as your own personal Time Machine.

Live Demo: [webChronicle](https://webchronicle.dev/)
- Requires Node.js 20.x or later
- Each snapshot is stored in a separate folder with the following structure:
- `YYYY-MM-DDTHH-MM-SS` (timestamp)
- `example.com` (domain)

## Run Locally
After cloning the repository, install the dependencies:
```bash
npm install
npm run scraper
npm run start
```

## Usage
1. Update `webchronicle.config.js` with your configuration:
```javascript
module.exports = {
  // ...
  urls: [
    'https://example.com',
    'https://example.org',
  ],
  urlFilter: (url) => {
    return url.includes('example.com') || url.includes('example.org');
  },
  // ...
};
```
Full configuration options are available in the [Options](https://github.com/website-scraper/node-website-scraper?tab=readme-ov-file#options) section.
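Note that a substring-based `urlFilter` like the one above also matches URLs that merely *contain* the domain anywhere (e.g. a path such as `https://evil.example/example.com`). A stricter sketch using the WHATWG `URL` API is shown below; the host list and extension filter are illustrative assumptions, not part of webChronicle's shipped config:

```javascript
// Hypothetical stricter urlFilter: match by exact hostname and
// skip common binary downloads to keep snapshots small.
const allowedHosts = new Set(['example.com', 'www.example.com']);

const urlFilter = (url) => {
  try {
    const { hostname, pathname } = new URL(url);
    if (!allowedHosts.has(hostname)) return false;
    return !/\.(zip|pdf|mp4)$/i.test(pathname);
  } catch {
    return false; // malformed URLs are rejected outright
  }
};

module.exports = { urlFilter };
```

Because `new URL()` throws on malformed input, the `try`/`catch` doubles as input validation.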
2. Run the scraper:
```bash
npm run scraper
```
3. Commit the changes to your repository:
```bash
git add ./scrapped-websites
git commit -m "Scrapped websites"
git push
```
4. Deploy the project to your preferred platform, or run it locally at [http://localhost:8080](http://localhost:8080):
```bash
npm run start
```
5. Explore the snapshots of the webpages over time.
6. Enjoy! 🎉

## Deployment
You can deploy the project to Netlify by clicking the button below:
[![Netlify Deploy](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/reatlat/webchronicle)
You can also deploy the project to Vercel by clicking the button below:
[![Vercel Deploy](https://vercel.com/button)](https://vercel.com/import/project?template=https://github.com/reatlat/webchronicle)
This project can also be deployed to other platforms such as Heroku, AWS, Cloudflare Pages, or Google Cloud.
## Contributing
If you notice an issue, feel free to [open an issue](https://github.com/reatlat/webchronicle/issues).
1. Fork this repo
2. Clone `git clone [email protected]:reatlat/webchronicle.git`
3. Create your feature branch `git checkout -b my-new-feature`
4. Commit your changes `git commit -am 'Add some feature'`
5. Push to the branch `git push origin my-new-feature`
6. Create a new Pull Request
7. Sit back and enjoy your cup of coffee ☕️

## Credits
Special thanks to James Dancer for the inspiration behind the name—your idea was spot on!

Logo design by Tatiana Zappa.

Built with Eleventy and website-scraper.
## License
This project is open source and available under the [MIT License](LICENSE).