Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/th3-c0der/web-crawler
A simple WebCrawler for exploring and downloading content from web pages within a given domain/url.
- Host: GitHub
- URL: https://github.com/th3-c0der/web-crawler
- Owner: Th3-C0der
- License: mit
- Created: 2024-01-26T10:41:47.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2024-02-17T02:51:05.000Z (9 months ago)
- Last Synced: 2024-02-17T03:30:31.090Z (9 months ago)
- Topics: th3-c0der, th3-coder, th3c0der, th3coder, tool, tools, web-tool, webcrawl, webcrawler, webcrawlers, webcrawling
- Language: HTML
- Homepage: https://web-crawler-opal.vercel.app
- Size: 44.9 KB
- Stars: 3
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
## A simple WebCrawler made in Python for exploring and downloading content from web pages within a given domain/URL.
[![Typing SVG](https://readme-typing-svg.demolab.com?font=Rubik+Glitch&pause=1000&color=00FF00&random=false&width=435&lines=WebCrawler+By+%5BTh3-C0der%5D)](https://th3-c0der.github.io)

## About Tool:
- This tool crawls the given URL/domain, collects the HTML file of each web page, and compresses them into a zip archive for downloading (see the sketch after this list).
- My First Original Tool ^_^
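The project's own source isn't reproduced on this page, so the following is only a minimal sketch of the crawl-and-zip idea described above, assuming the `requests` and `beautifulsoup4` packages are installed; the function and file names here are hypothetical and not taken from the repository.

```python
# Minimal sketch of "crawl a domain, collect each page's HTML, zip it for download".
# Not the project's actual code; names and defaults are hypothetical.
import zipfile
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def crawl_and_zip(start_url: str, archive_path: str = "site.zip", max_pages: int = 50) -> None:
    """Crawl pages within the start URL's domain and store their HTML in a zip archive."""
    domain = urlparse(start_url).netloc
    seen, queue = set(), [start_url]

    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as archive:
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)

            try:
                response = requests.get(url, timeout=10)
            except requests.RequestException:
                continue  # skip unreachable pages

            # Save the page's HTML under a filename derived from its path.
            name = urlparse(url).path.strip("/").replace("/", "_") or "index"
            archive.writestr(f"{name}.html", response.text)

            # Queue links that stay on the same domain.
            soup = BeautifulSoup(response.text, "html.parser")
            for link in soup.find_all("a", href=True):
                target = urljoin(url, link["href"])
                if urlparse(target).netloc == domain and target not in seen:
                    queue.append(target)


if __name__ == "__main__":
    crawl_and_zip("https://example.com")
```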
## INSTALLATION:

* `apt update -y`
* `apt upgrade -y`
* `pkg install python -y`
* `pkg install git`
* `git clone https://github.com/Th3-C0der/Web-Crawler`
* `ls`
* `cd Web-Crawler`
* `pip install -r requirements.txt`

## RUN!:
* `cd Web-Crawler`
* `python main.py`
* Open this URL in your browser: `http://127.0.0.1:5000`
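The RUN step above starts a small local web server on port 5000. The project's actual routes aren't documented on this page, so this is only a minimal Flask sketch of that pattern; the route names and the `site.zip` filename (assumed to be produced by a crawl like the sketch above) are hypothetical.

```python
# Minimal sketch of a local Flask front end served at http://127.0.0.1:5000.
# Hypothetical routes and filenames; not the project's actual main.py.
from pathlib import Path

from flask import Flask, send_file

app = Flask(__name__)


@app.route("/")
def index():
    # A real UI would let the user enter the URL/domain to crawl.
    return "<h1>WebCrawler</h1><p><a href='/download'>Download the crawled archive</a></p>"


@app.route("/download")
def download():
    archive = Path("site.zip")  # e.g. produced by the crawl sketch above
    if not archive.exists():
        return "No archive yet - run a crawl first.", 404
    return send_file(archive, as_attachment=True)


if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```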
## UPDATE:

* To update the script: `python update.py`