# 🗺️ XML Sitemap Generator (PHP)

A powerful and customizable sitemap generator written in PHP (PHP 8+).
It crawls one or multiple domains, respects `robots.txt`, follows meta directives, supports resumable sessions, sends logs by email, and can even notify search engines when the sitemap is ready.
It is optimized for large websites and offers advanced crawl controls, meta/robots filtering, JSON/HTML export, and more.

---

## ✅ Features

- 🔗 Multi-domain support (comma-separated URLs)
- 📑 Combined sitemap for all domains
- 📑 Automatically creates multiple sitemap files if more than 50,000 URLs are found
- 🧭 Crawling depth control
- 🔍 `robots.txt` and `<meta name="robots">` handling
- 🔁 Resumable crawl via cache (optional)
- 💣 `--resetcache` to force full crawl (new!)
- 💣 `--resetlog` to delete old log files (new!)
- 🧠 Dynamic priority & changefreq rules (via config or patterns)
- 🧹 Pretty or single-line XML output
- 📦 GZIP compression (optional)
- 📧 Log by email
- 🛠 Health check report
- 📑 Ping Google/Bing/Yandex
- 🧪 Debug mode with detailed logs
- 📑 Structured logging with timestamps and levels (`info`, `error`, `debug`, etc.)
- 📑 Log export: JSON format + HTML report (`logs/crawl_log.json`, `logs/crawl_log.html`)
- 📑 Visual crawl map generation (`crawl_graph.json`, `crawl_map.html`)
- 📧 Flattened email reports with attached crawl logs
- 📧 Customizable sender email via `--from`
- 📑 Public base URL for sitemap/map references via `--publicbase`

---

## 🚀 Requirements

- PHP 8.0 or newer
- `curl` and `dom` extensions enabled
- Write permissions to the script folder (for logs/cache/sitemaps)
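
A quick way to verify the environment (assuming a POSIX shell with PHP on the PATH):

```bash
# Check the PHP version (should report 8.0 or newer)
php -v

# Confirm the curl and dom extensions are loaded
php -m | grep -Ei '^(curl|dom)$'
```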

---

## ⚙️ Usage (CLI)

```bash
php sitemap.php \
  --url=https://yourdomain.com,https://blog.yourdomain.com \
  --key=YOUR_SECRET_KEY \
  [options]
```

## 🌐 Usage (Browser)

```url
sitemap.php?url=https://yourdomain.com&key=YOUR_SECRET_KEY&gzip&prettyxml
```

---

## 🧩 Options

| Option | Description |
|---------------------|--------------------------------------------------------------------------|
| `--url=` | Comma-separated domain list to crawl (required) |
| `--key=` | Secret key to authorize script execution (required) |
| `--output=` | Output path for the sitemap file |
| `--depth=` | Max crawl depth (default: 3) |
| `--gzip` | Export sitemap as `.gz` |
| `--prettyxml` | Human-readable XML output |
| `--resume` | Resume from last crawl using `cache/visited.json` |
| `--resetcache` | Force fresh crawl by deleting the cache (NEW) |
| `--resetlog` | Clear previous crawl logs before start (NEW) |
| `--filters` | Enable external filtering from `filter_config.json` |
| `--graph` | Export visual crawl map (JSON + interactive HTML) |
| `--priorityrules` | Enable dynamic `<priority>` values based on URL patterns |
| `--changefreqrules` | Enable dynamic `<changefreq>` values based on URL patterns |
| `--ignoremeta` | Ignore `<meta name="robots">` directives |
| `--respectrobots` | Obey rules in `robots.txt` |
| `--email=` | Send crawl log to this email |
| `--ping` | Notify search engines after sitemap generation (⚠️ Google/Bing ping deprecated) |
| `--threads=` | Number of concurrent crawl threads (default: 10) |
| `--agent=` | Set a custom User-Agent |
| `--splitbysite` | Generate one sitemap per domain and build sitemap_index.xml to link them |
| `--graphmap` | Generate crawl map as JSON and interactive HTML |
| `--publicbase=` | Public base URL for HTML links (e.g., https://example.com/sitemaps) |
| `--from=` | Sender address for email reports |
| `--debug` | Output detailed log info for debugging |
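
For illustration, a run combining several of these options might look like this (domains, key, and email address are placeholders):

```bash
php sitemap.php \
  --url=https://yourdomain.com,https://blog.yourdomain.com \
  --key=YOUR_SECRET_KEY \
  --depth=5 \
  --respectrobots \
  --gzip \
  --prettyxml \
  --email=you@yourdomain.com \
  --ping
```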

---

## 📁 Output Files

- `sitemap.xml` or `sitemap-*.xml`
- `sitemap.xml.gz` (optional)
- `sitemap_index.xml` (if split)
- `cache/visited.json` → stores crawl progress (used with `--resume`; see the example after this list)
- `logs/crawl_log.txt` → full crawl log
- `logs/crawl_log.json` → structured log as JSON
- `logs/crawl_log.html` → visual HTML report of the crawl log
- `logs/crawl_report_*.txt` → emailed attachment
- `logs/health_report.txt` → summary of the crawl (errors, speed, blocks)
- `crawl_graph.json` → graph structure for visualization
- `crawl_map.html` → interactive crawl map
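
The cache makes interrupted crawls resumable. A sketch of how the related flags fit together (placeholder domain and key):

```bash
# First run; crawl progress is written to cache/visited.json
php sitemap.php --url=https://yourdomain.com --key=YOUR_SECRET_KEY

# Pick up an interrupted crawl where it left off
php sitemap.php --url=https://yourdomain.com --key=YOUR_SECRET_KEY --resume

# Discard the cache and old logs and start completely fresh
php sitemap.php --url=https://yourdomain.com --key=YOUR_SECRET_KEY --resetcache --resetlog
```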

---

## ⚙️ External Filter Config

Create a `config/filter.json` to define your own include/exclude patterns and dynamic rules:

```json
{
  "excludeExtensions": ["jpg", "png", "zip", "docx"],
  "excludePatterns": ["*/private/*", "debug"],
  "includeOnlyPatterns": ["blog", "news"],
  "priorityPatterns": {
    "high": ["blog", "news"],
    "low": ["impressum", "privacy"]
  },
  "changefreqPatterns": {
    "daily": ["blog", "news"],
    "monthly": ["impressum", "agb"]
  }
}
```

Activate with:
```bash
--filters --priorityrules --changefreqrules
```
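
In context, a full invocation might look like this (placeholder domain and key):

```bash
php sitemap.php \
  --url=https://yourdomain.com \
  --key=YOUR_SECRET_KEY \
  --filters --priorityrules --changefreqrules
```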

---

## 📬 Ping Support

With `--ping` enabled, the script will notify:

- Yandex: `https://webmaster.yandex.com/ping`

As of 2023/2024:
- ❌ **Google** and **Bing** ping endpoints are deprecated (410 Gone)
- ✅ Use `robots.txt` with a `Sitemap:` entry
- ✅ Optionally submit the sitemap in the search engines' webmaster tools
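
The script sends the notification for you. For a manual test, the conventional endpoint format is assumed to take the sitemap address as a `sitemap` query parameter (an assumption, not verified against Yandex's current documentation):

```bash
# Illustrative manual ping; the sitemap URL is passed URL-encoded
curl "https://webmaster.yandex.com/ping?sitemap=https%3A%2F%2Fyourdomain.com%2Fsitemap.xml"
```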

---

## 🔐 Security

The script **requires a secret key** (`--key=` or `key=`) to run.
Set it inside the script:

```php
$authorized_hash = 'YOUR_SECRET_KEY';
```
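
One way to generate a random key is PHP itself; this is just a suggestion, any sufficiently long random string works:

```bash
# Print a 64-character hex string to paste into $authorized_hash
php -r "echo bin2hex(random_bytes(32)), PHP_EOL;"
```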

---

## 📤 Email Log

Send crawl reports to your inbox with:

```bash
--email=you@yourdomain.com
```

Your server must support the `mail()` function.
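
If you are unsure whether `mail()` is available, a quick check from the command line (assuming the CLI and the web server share the same PHP configuration):

```bash
# Prints bool(true) if the mail() function is available to PHP
php -r "var_dump(function_exists('mail'));"
```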

---

## 🧪 Debugging

Enable `--debug` to log everything:
- Pattern matches
- Skipped URLs
- Meta robots blocking
- Robots.txt interpretation
- Response times
- Log file resets

---

## Sitemap Splitting

If more than **50,000 URLs** are crawled (the limit of a single sitemap file per [sitemaps.org spec](https://www.sitemaps.org/protocol.html)),
the script will automatically create multiple sitemap files:

- `sitemap-1.xml`, `sitemap-2.xml`, ...
- Or `domain-a-1.xml`, `domain-a-2.xml`, ... if `--splitbysite` is active
- These are automatically referenced from a `sitemap_index.xml`

No configuration is needed; the split is automatic.

---

### How Split-by-Site Works

When using `--splitbysite`, the crawler will:

1. Create a separate sitemap file for each domain (e.g., `/sitemaps/domain1.xml`, `/sitemaps/domain2.xml`)
2. Automatically generate a `sitemap_index.xml` file in the root directory
3. Ping search engines with the `sitemap_index.xml` URL instead of the individual sitemap files (in practice only Yandex, since the Google and Bing ping endpoints are deprecated)

This is useful when crawling multiple domains in a single run.
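
For illustration, a multi-domain run with per-site sitemaps might look like this (placeholder domains):

```bash
php sitemap.php \
  --url=https://shop.example.com,https://blog.example.com \
  --key=YOUR_SECRET_KEY \
  --splitbysite \
  --ping
```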

---

## Crawl Map Visualization

If you enable `--graph`, the crawler will export:

- `crawl_graph.json` – link structure as raw data
- `crawl_map.html` – interactive map powered by D3.js

You can explore your site structure visually, zoom in/out, drag nodes, and inspect links.
Useful for spotting crawl traps, dead ends, and structure gaps.

πŸ“ Tip: For large sites, open the HTML file in Chrome or Firefox.

---

## 🔍 Example robots.txt

```
User-agent: *
Disallow:

Sitemap: https://yourdomain.com/sitemap.xml
```

---

## 📄 License

MIT License
Feel free to fork, modify, or contribute!

---

## 👤 Author

Built by Gilles Dumont (Qiubits SARL)
Contributions and feedback welcome.