Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/wangyihang/subdomain-crawler
A program for collecting subdomains for a list of given second-level domains (SLDs)
- Host: GitHub
- URL: https://github.com/wangyihang/subdomain-crawler
- Owner: WangYihang
- License: mit
- Created: 2023-06-28T08:10:19.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2023-09-05T05:57:43.000Z (over 1 year ago)
- Last Synced: 2024-02-22T19:33:03.425Z (11 months ago)
- Topics: dns, go, penetration, subdomain, subdomain-enumeration, subdomain-finder, subdomain-scanner
- Language: Go
- Homepage:
- Size: 2.44 MB
- Stars: 12
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Subdomain Crawler
This program helps you collect subdomains for a list of given second-level domains (SLDs).
![](assets/demo.gif)
## Installation
* Option 1: Download from [GitHub Releases](https://github.com/WangYihang/Subdomain-Crawler/releases/latest) directly (Recommended)
* Option 2: Install via `go install`:

```bash
$ go install github.com/WangYihang/Subdomain-Crawler/cmd/subdomain-crawler@latest
```
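After installing, you can confirm the binary is on your `PATH` with the `--version` flag documented in the help output below (the exact version string it prints is not shown in this README):

```bash
$ subdomain-crawler --version
```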
## Usage

1. Edit the input file `input.txt` (one second-level domain per line):
```bash
$ head input.txt
tsinghua.edu.cn
pku.edu.cn
fudan.edu.cn
sjtu.edu.cn
zju.edu.cn
```

2. Run the program (a complete example session is shown after these steps):
```bash
$ subdomain-crawler --help
Usage:
  subdomain-crawler [OPTIONS]

Application Options:
  -i, --input-file=    The input file (default: input.txt)
  -o, --output-folder= The output folder (default: output)
  -t, --timeout=       Timeout of each HTTP request (in seconds) (default: 4)
  -n, --num-workers=   Number of workers (default: 32)
  -d, --debug          Enable debug mode
  -v, --version        Version

Help Options:
  -h, --help           Show this help message

$ subdomain-crawler
```

3. Check out the results in the `output/` folder:
```bash
$ head output/*
```
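For reference, a complete session combining the three steps above might look like the following. The domain names come from the sample `input.txt`, the flag values are illustrative rather than recommended defaults, and the assumption that each output file contains one result per line is not confirmed by this README:

```bash
# 1. Build an input file with one second-level domain per line
$ printf '%s\n' tsinghua.edu.cn pku.edu.cn > input.txt

# 2. Run the crawler, passing the documented flags explicitly
$ subdomain-crawler --input-file=input.txt --output-folder=output --timeout=8 --num-workers=64

# 3. Inspect the results (assuming one collected entry per line)
$ wc -l output/*
```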