Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/kgretzky/dcrawl
Simple, but smart, multi-threaded web crawler for randomly gathering huge lists of unique domain names.
- Host: GitHub
- URL: https://github.com/kgretzky/dcrawl
- Owner: kgretzky
- License: mit
- Created: 2017-08-14T15:24:52.000Z (about 7 years ago)
- Default Branch: master
- Last Pushed: 2019-07-14T09:42:02.000Z (over 5 years ago)
- Last Synced: 2024-08-02T01:26:07.756Z (3 months ago)
- Language: Go
- Size: 1.6 MB
- Stars: 511
- Watchers: 29
- Forks: 92
- Open Issues: 4
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-repositories - kgretzky/dcrawl - Simple, but smart, multi-threaded web crawler for randomly gathering huge lists of unique domain names. (Go)
README
# dcrawl
dcrawl is a simple, but smart, multi-threaded web crawler for randomly gathering huge lists of unique domain names.
[![baby-gopher](https://raw.githubusercontent.com/drnic/babygopher-site/gh-pages/images/babygopher-badge.png)](http://www.babygopher.org)
![demo](https://raw.githubusercontent.com/kgretzky/dcrawl/master/img/dcrawl.gif)
## How does it work?
dcrawl takes one site URL as input and detects all `<a href=...>` links in the site's body. Each found link is put into a queue. Each queued link is then crawled in the same way, branching out to more URLs found in each site's body.
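For illustration, here is a minimal Go sketch of that crawl loop (not dcrawl's actual code): fetch a page, pull `href` links out of the body with a regular expression, record each newly seen hostname, and queue the link for further crawling. The seed URL is just the example used in the "How to run?" section below.

```
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"regexp"
)

// naive href extractor, good enough for a sketch
var hrefRe = regexp.MustCompile(`href="(https?://[^"]+)"`)

func main() {
	queue := []string{"http://wired.com"} // seed URL (example only)
	seen := map[string]bool{}             // unique hostnames found so far

	for len(queue) > 0 && len(seen) < 100 { // stop after 100 domains in this sketch
		u := queue[0]
		queue = queue[1:]

		resp, err := http.Get(u)
		if err != nil {
			continue // skip inaccessible sites
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()

		for _, m := range hrefRe.FindAllStringSubmatch(string(body), -1) {
			link := m[1]
			parsed, err := url.Parse(link)
			if err != nil || parsed.Hostname() == "" || seen[parsed.Hostname()] {
				continue
			}
			seen[parsed.Hostname()] = true
			fmt.Println(parsed.Hostname()) // newly discovered unique domain
			queue = append(queue, link)    // branch out to the newly found site
		}
	}
}
```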
How **smart crawling** works:
* Branches out to only a predefined number of links found per hostname.
* Limits the number of different hostnames allowed per domain *(avoids subdomain crawling hell, e.g. blogspot.com)*.
* Can be restarted with the same list of domains - the last saved domains are added back to the URL queue.
* Crawls only sites that return a *text/html* Content-Type in the HEAD response.
* Retrieves a site body of at most 1 MB in size *(see the sketch after this list)*.
* Does not save inaccessible domains.
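Below is a minimal Go sketch (again illustrative, not dcrawl's implementation; the `fetchHTML` helper is hypothetical) of two of these checks: issuing a HEAD request to confirm a *text/html* Content-Type before crawling a site, and capping the downloaded body at 1 MB.

```
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchHTML returns at most 1 MB of body, and only for text/html pages.
func fetchHTML(u string) (string, error) {
	head, err := http.Head(u)
	if err != nil {
		return "", err
	}
	head.Body.Close()
	if !strings.HasPrefix(head.Header.Get("Content-Type"), "text/html") {
		return "", fmt.Errorf("skipping %s: not text/html", u)
	}

	resp, err := http.Get(u)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	// read at most 1 MB of the response body
	body, err := io.ReadAll(io.LimitReader(resp.Body, 1<<20))
	return string(body), err
}

func main() {
	body, err := fetchHTML("http://wired.com")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("fetched %d bytes of HTML\n", len(body))
}
```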
## How to run?

```
go build dcrawl.go
./dcrawl -url http://wired.com -out ~/domain_lists/domains1.txt -t 8
```

## Usage
```
___ __
__| _/________________ __ _ _| |
/ __ |/ ___\_ __ \__ \\ \/ \/ / |
/ /_/ \ \___| | \// __ \\ /| |__
\____ |\___ >__| (____ /\/\_/ |____/
\/ \/ \/ v.1.0

usage: dcrawl -url URL -out OUTPUT_FILE -t THREADS
-ms int
maximum different subdomains for one domain (def. 10) (default 10)
-mu int
maximum number of links to spider per hostname (def. 5) (default 5)
-out string
output file to save hostnames to
-t int
number of concurrent threads (def. 8) (default 8)
-url string
URL to start scraping from
-v bool
verbose (default false)
```
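For example, the per-hostname and per-domain limits and verbose output can be set explicitly (the values shown here simply repeat the documented defaults):

```
./dcrawl -url http://wired.com -out ~/domain_lists/domains1.txt -t 8 -mu 5 -ms 10 -v
```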
## License

dcrawl was made by [Kuba Gretzky](https://twitter.com/mrgretzky) from [breakdev.org](https://breakdev.org) and released under the MIT license.