https://github.com/qiubits2007/xml-sitemap
Multi-domain XML sitemap generator with support for robots.txt, meta tags, email logging & search engine pinging
- Host: GitHub
- URL: https://github.com/qiubits2007/xml-sitemap
- Owner: qiubits2007
- License: MIT
- Created: 2025-03-27T08:08:30.000Z (7 months ago)
- Default Branch: main
- Last Pushed: 2025-04-25T05:20:47.000Z (6 months ago)
- Last Synced: 2025-04-25T06:27:13.354Z (6 months ago)
- Topics: crawler, generator, gzip, multi-domain, php8, robots-txt, seo, seotools, sitemap-builder, sitemap-generator, sitemap-xml
- Language: PHP
- Size: 167 KB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Security: SECURITY.md
README
# XML Sitemap Generator (PHP)
A powerful and customizable sitemap generator written in PHP (PHP 8+).
It crawls one or multiple domains, respects `robots.txt`, follows meta directives, supports resumable sessions, sends logs by email, and can even notify search engines when the sitemap is ready.
It is optimized for large websites and offers advanced crawl controls, meta/robots filtering, JSON/HTML export, and more.

---
## Features
- Multi-domain support (comma-separated URLs)
- Combined sitemap for all domains
- Automatically creates multiple sitemap files if more than 50,000 URLs are found
- Crawling depth control
- `robots.txt` and `<meta name="robots">` handling
- Resumable crawl via cache (optional)
- `--resetcache` to force a full crawl (new!)
- `--resetlog` to delete old log files (new!)
- Dynamic priority & changefreq rules (via config or patterns)
- Pretty or single-line XML output
- GZIP compression (optional)
- Log by email
- Health check report
- Ping Google/Bing/Yandex
- Debug mode with detailed logs
- Structured logging with timestamps and levels (`info`, `error`, `debug`, etc.)
- Log export: JSON format + HTML report (`logs/crawl_log.json`, `logs/crawl_log.html`)
- Visual crawl map generation (`crawl_graph.json`, `crawl_map.html`)
- Flattened email reports with attached crawl logs
- Customizable sender email via `--from`
- Public base URL for sitemap/map references via `--publicbase`

---
## Requirements
- PHP 8.0 or newer
- `curl` and `dom` extensions enabled
- Write permissions to the script folder (for logs/cache/sitemaps)
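
You can confirm the required extensions are loaded from the command line (plain PHP, nothing project-specific):

```bash
# Each command prints bool(true) if the extension is loaded.
php -r 'var_dump(extension_loaded("curl"));'
php -r 'var_dump(extension_loaded("dom"));'
```

---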
## Usage (CLI)
```bash
php sitemap.php \
--url=https://yourdomain.com,https://blog.yourdomain.com \
--key=YOUR_SECRET_KEY \
[options]
```

## Usage (Browser)
```url
sitemap.php?url=https://yourdomain.com&key=YOUR_SECRET_KEY&gzip&prettyxml
```

---
## Options
| Option | Description |
|---------------------|--------------------------------------------------------------------------|
| `--url=` | Comma-separated domain list to crawl (required) |
| `--key=` | Secret key to authorize script execution (required) |
| `--output=` | Output path for the sitemap file |
| `--depth=` | Max crawl depth (default: 3) |
| `--gzip` | Export sitemap as `.gz` |
| `--prettyxml` | Human-readable XML output |
| `--resume` | Resume from last crawl using `cache/visited.json` |
| `--resetcache` | Force fresh crawl by deleting the cache (NEW) |
| `--resetlog` | Clear previous crawl logs before start (NEW) |
| `--filters` | Enable external filtering from `filter_config.json` |
| `--graph` | Export visual crawl map (JSON + interactive HTML) |
| `--priorityrules`   | Enable dynamic `<priority>` based on URL patterns                         |
| `--changefreqrules` | Enable dynamic `<changefreq>` based on URL patterns                       |
| `--ignoremeta`      | Ignore `<meta name="robots">` directives                                  |
| `--respectrobots` | Obey rules in `robots.txt` |
| `--email=` | Send crawl log to this email |
| `--ping`            | Notify search engines after sitemap generation (note: Google/Bing ping is deprecated) |
| `--threads=` | Number of concurrent crawl threads (default: 10) |
| `--agent=` | Set a custom User-Agent |
| `--splitbysite` | Generate one sitemap per domain and build sitemap_index.xml to link them |
| `--graphmap` | Generate crawl map as JSON and interactive HTML |
| `--publicbase=` | Public base URL for HTML links (e.g., https://example.com/sitemaps) |
| `--from=` | Sender address for email reports |
| `--debug`           | Output detailed log info for debugging                                    |
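
For instance, a fuller invocation combining several of the options above (domains, key, and email address are placeholders):

```bash
php sitemap.php \
  --url=https://yourdomain.com,https://blog.yourdomain.com \
  --key=YOUR_SECRET_KEY \
  --depth=5 \
  --threads=10 \
  --respectrobots \
  --filters --priorityrules --changefreqrules \
  --gzip --prettyxml \
  --splitbysite \
  --email=you@yourdomain.com \
  --debug
```

---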
## Output Files
- `sitemap.xml` or `sitemap-*.xml`
- `sitemap.xml.gz` (optional)
- `sitemap_index.xml` (if split)
- `cache/visited.json` – stores crawl progress (used with `--resume`)
- `logs/crawl_log.txt` – full crawl log
- `logs/crawl_log.json` – structured log as JSON
- `logs/crawl_log.html` – visual HTML report of the crawl log
- `logs/crawl_report_*.txt` – emailed attachment
- `logs/health_report.txt` – summary of the crawl (errors, speed, blocks)
- `crawl_graph.json` – graph structure for visualization
- `crawl_map.html` – interactive crawl map

---
## External Filter Config
Create a `config/filter.json` to define your own include/exclude patterns and dynamic rules:
```json
{
  "excludeExtensions": ["jpg", "png", "zip", "docx"],
  "excludePatterns": ["*/private/*", "debug"],
  "includeOnlyPatterns": ["blog", "news"],
  "priorityPatterns": {
    "high": ["blog", "news"],
    "low": ["impressum", "privacy"]
  },
  "changefreqPatterns": {
    "daily": ["blog", "news"],
    "monthly": ["impressum", "agb"]
  }
}
```

Activate with:
```bash
--filters --priorityrules --changefreqrules
```
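
How the script matches these patterns internally is its own business; as a rough illustration, here is a minimal sketch of glob/substring matching against such a config. The `matchesAny()` helper and the priority values are assumptions for illustration, not this project's actual code:

```php
<?php
// Hypothetical illustration of applying filter-config-style rules.
function matchesAny(string $url, array $patterns): bool {
    foreach ($patterns as $pattern) {
        // Wildcard patterns as shell-style globs, plain strings as substrings.
        if (str_contains($pattern, '*') ? fnmatch($pattern, $url) : str_contains($url, $pattern)) {
            return true;
        }
    }
    return false;
}

$config = json_decode(file_get_contents('config/filter.json'), true);
$url    = 'https://yourdomain.com/blog/post-1';

// Example priority decision (the 0.5 neutral default is an assumption).
$priority = '0.5';
if (matchesAny($url, $config['priorityPatterns']['high'] ?? [])) {
    $priority = '1.0';
} elseif (matchesAny($url, $config['priorityPatterns']['low'] ?? [])) {
    $priority = '0.3';
}
echo $priority . PHP_EOL; // "1.0" here, since the URL matches "blog"
```

---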
## Ping Support
With `--ping` enabled, the script will notify:
- Yandex: `https://webmaster.yandex.com/ping`
As of 2023/2024:
- **Google** and **Bing** ping endpoints are deprecated (410 Gone)
- Use a `Sitemap:` entry in `robots.txt` instead
- Optionally submit sitemaps in the engines' webmaster tools
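
Under the hood a ping is just an HTTP GET. A minimal sketch with PHP's curl extension, assuming the endpoint accepts the sitemap URL as a `sitemap` query parameter (verify against Yandex's current documentation; this is not the project's actual ping code):

```php
<?php
// Illustrative sitemap ping via HTTP GET.
$sitemapUrl = 'https://yourdomain.com/sitemap.xml';
$ping = 'https://webmaster.yandex.com/ping?sitemap=' . urlencode($sitemapUrl);

$ch = curl_init($ping);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT        => 10,
]);
curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

echo $status === 200 ? "Ping accepted\n" : "Ping returned HTTP $status\n";
```

---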
## Security
The script **requires a secret key** (`--key=` or `key=`) to run.
Set it inside the script:

```php
$authorized_hash = 'YOUR_SECRET_KEY';
```
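
For illustration, a constant-time comparison of the supplied key (CLI `--key=` or browser `?key=`) against that secret might look like this. This is a sketch, not the project's actual validation code:

```php
<?php
// Hypothetical key check: read the key from CLI args or the query string,
// then compare in constant time with hash_equals().
$authorized_hash = 'YOUR_SECRET_KEY';

$supplied = '';
if (PHP_SAPI === 'cli') {
    foreach (array_slice($argv, 1) as $arg) {
        if (str_starts_with($arg, '--key=')) {
            $supplied = substr($arg, 6);
        }
    }
} else {
    $supplied = (string)($_GET['key'] ?? '');
}

if (!hash_equals($authorized_hash, $supplied)) {
    http_response_code(403);
    exit("Unauthorized\n");
}
```

---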
## Email Log
Send crawl reports to your inbox with:
```bash
--email=you@yourdomain.com
```

Your server must support the `mail()` function.
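
Behind the scenes this can be as simple as PHP's built-in `mail()`. A minimal sketch (addresses are placeholders; the actual script sends flattened reports with attached crawl logs):

```php
<?php
// Illustrative only: email the plain-text crawl log.
$log = file_get_contents('logs/crawl_log.txt');
$ok  = mail(
    'you@yourdomain.com',          // recipient (--email=)
    'Sitemap crawl report',        // subject
    $log,                          // body
    'From: sitemap@yourdomain.com' // sender (--from=)
);
echo $ok ? "Report sent\n" : "mail() failed\n";
```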
---
## Debugging
Enable `--debug` to log everything:
- Pattern matches
- Skipped URLs
- Meta robots blocking
- Robots.txt interpretation
- Response times
- Log file resets

---
## Sitemap Splitting
If more than **50,000 URLs** are crawled (the limit of a single sitemap file per [sitemaps.org spec](https://www.sitemaps.org/protocol.html)),
the script will automatically create multiple sitemap files:

- `sitemap-1.xml`, `sitemap-2.xml`, ...
- Or `domain-a-1.xml`, `domain-a-2.xml`, ... if `--splitbysite` is active
- These are automatically referenced from a `sitemap_index.xml`

No configuration is needed; the split is automatic.
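
For reference, a sitemap index is itself a small XML file defined by the sitemaps.org protocol. A minimal sketch of generating one with PHP's bundled XMLWriter extension (file names and base URL are placeholders, not this project's actual code):

```php
<?php
// Illustrative only: build sitemap_index.xml per the sitemaps.org protocol.
$files = ['sitemap-1.xml', 'sitemap-2.xml'];

$w = new XMLWriter();
$w->openURI('sitemap_index.xml');
$w->setIndent(true);
$w->startDocument('1.0', 'UTF-8');
$w->startElement('sitemapindex');
$w->writeAttribute('xmlns', 'http://www.sitemaps.org/schemas/sitemap/0.9');

foreach ($files as $file) {
    $w->startElement('sitemap');
    $w->writeElement('loc', 'https://yourdomain.com/' . $file);
    $w->writeElement('lastmod', date('c')); // W3C datetime, optional per spec
    $w->endElement();
}

$w->endElement();
$w->endDocument();
$w->flush();
```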
---
### How Split-by-Site Works
When using `--splitbysite`, the crawler will:
1. Create a separate sitemap file for each domain (e.g., `/sitemaps/domain1.xml`, `/sitemaps/domain2.xml`)
2. Automatically generate a `sitemap_index.xml` file in the root directory
3. Ping search engines (Google, Bing, Yandex) with the `sitemap_index.xml` URL instead of individual sitemap files

This is useful when crawling multiple domains in a single run.
---
## Crawl Map Visualization
If you enable `--graph`, the crawler will export:
- `crawl_graph.json` – link structure as raw data
- `crawl_map.html` – interactive map powered by D3.js

You can explore your site structure visually, zoom in/out, drag nodes, and inspect links. Useful for spotting crawl traps, dead ends, and structure gaps.

Tip: For large sites, open the HTML file in Chrome or Firefox.
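
If the map loads `crawl_graph.json` via JavaScript, some browsers block such requests from `file://` pages, so serving the output folder over HTTP avoids this. PHP's built-in development server works well here:

```bash
# Serve the current directory, then open
# http://localhost:8000/crawl_map.html in a browser.
php -S localhost:8000
```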
---
## Example robots.txt
```
User-agent: *
Disallow:

Sitemap: https://yourdomain.com/sitemap.xml
```

---
## License
MIT License
Feel free to fork, modify, or contribute!

---
## Author
Built by Gilles Dumont (Qiubits SARL)
Contributions and feedback welcome.