# 🪞 S3 Mirror

> A Python utility for mirroring buckets and objects between S3-compatible endpoints.

[![Lint](https://img.shields.io/github/actions/workflow/status/soakes/s3mirror/lint.yml?branch=main&style=flat-square&label=lint)](https://github.com/soakes/s3mirror/actions/workflows/lint.yml)
[![Format](https://img.shields.io/github/actions/workflow/status/soakes/s3mirror/format.yml?branch=main&style=flat-square&label=format)](https://github.com/soakes/s3mirror/actions/workflows/format.yml)
[![Python](https://img.shields.io/badge/Python-3.10%2B-3776AB.svg?style=flat-square&logo=python&logoColor=white)](https://www.python.org/)
[![License](https://img.shields.io/badge/License-MIT-2EA043.svg?style=flat-square)](LICENSE)
[![Issues](https://img.shields.io/github/issues/soakes/s3mirror?style=flat-square)](https://github.com/soakes/s3mirror/issues)

Built for operators who need a small, inspectable, automation-friendly mirror
tool for AWS S3, MinIO, Ceph, Wasabi, Backblaze B2, and other S3-compatible
storage systems.

**Quick links:** [🚀 Quick Start](#quick-start) · [⚙️ Configuration](#configuration) · [🔄 How It Works](#how-it-works) · [🧪 Usage](#usage) · [🛡️ Safety Notes](#safety-notes) · [🤖 CI/CD](#cicd)

<a id="table-of-contents"></a>
## 🧭 Table of Contents

- [📖 Overview](#overview)
- [✨ Capabilities](#capabilities)
- [🔄 How It Works](#how-it-works)
- [✅ Prerequisites](#prerequisites)
- [🚀 Quick Start](#quick-start)
- [⚙️ Configuration](#configuration)
- [🧪 Usage](#usage)
- [📋 Operational Behavior](#operational-behavior)
- [📜 Logging](#logging)
- [🛡️ Safety Notes](#safety-notes)
- [🤖 CI/CD](#cicd)
- [🗂️ Project Structure](#project-structure)
- [🩺 Troubleshooting](#troubleshooting)
- [🤝 Contributing](#contributing)
- [📄 License](#license)

---

<a id="overview"></a>
## 📖 Overview

`s3mirror` copies buckets and objects from one S3-compatible endpoint to another.
It is intentionally direct: one script, one config file, and clear logs that are
usable from an interactive shell, cron, systemd timers, or CI jobs.

In normal operation it does five things:

- loads YAML or JSON configuration
- verifies source and destination S3 connectivity
- discovers source buckets, excluding any configured bucket names
- creates missing destination buckets
- copies new or size-changed objects and optionally deletes destination-only objects

The project was created as an independent alternative to relying on
vendor-specific mirror tooling. It uses `boto3`, so the behavior is easy to
audit and the same workflow can be pointed at most S3-compatible services.

### First Run Checklist

1. Create a dedicated source credential with read access to the buckets you want mirrored.
2. Create a dedicated destination credential with bucket creation, upload, list, and delete permissions as needed.
3. Start with `delete_extraneous: false` or use `--no-delete` for the first validation run.
4. Run with `--debug` once to confirm bucket discovery, object counts, and transfer decisions.
5. Enable deletion only after the copy-only behavior looks correct.
6. For scheduled runs, use `--log-file` and alert on non-zero exit codes.
---

<a id="capabilities"></a>
## ✨ Capabilities

- **S3-compatible endpoints**: works with AWS S3 and S3-compatible APIs such as MinIO, Ceph, Wasabi, and Backblaze B2.
- **Whole-bucket mirroring**: discovers source buckets and mirrors each one to the destination.
- **Destination bootstrap**: creates missing destination buckets before copying objects.
- **Parallel transfers**: uses a configurable thread pool for copy and delete operations.
- **Multipart uploads**: uses `boto3` transfer configuration for larger object uploads.
- **Optional true mirror mode**: removes destination-only objects when deletion is enabled.
- **Bucket exclusions**: skips configured buckets that should not be mirrored.
- **YAML or JSON config**: keeps endpoint credentials, performance tuning, and sync behavior in one file.
- **CLI overrides**: lets operators override worker count and deletion behavior at runtime.
- **Automation-friendly logging**: supports normal, quiet, debug, and file logging modes.
- **CI validation**: checks formatting and linting across supported Python versions.

---

<a id="how-it-works"></a>
## 🔄 How It Works

At runtime, `s3mirror` follows a simple reconciliation loop over every source
bucket:

```mermaid
flowchart TD
    A[Load YAML or JSON config] --> B[Apply CLI overrides]
    B --> C[Create source and destination S3 clients]
    C --> D[Verify both endpoints with ListBuckets]
    D --> E[Discover source buckets]
    E --> F[Remove configured excluded buckets]
    F --> G[Process each bucket]
    G --> H{Destination bucket exists?}
    H -- No --> I[Create destination bucket]
    H -- Yes --> J[List source and destination objects]
    I --> J
    J --> K[Compare object keys and sizes]
    K --> L[Copy new or size-changed objects]
    K --> M{Deletion enabled?}
    M -- Yes --> N[Delete destination-only objects]
    M -- No --> O[Leave destination-only objects in place]
    L --> P[Print final summary and exit code]
    N --> P
    O --> P
```

The deployment shape is deliberately small:

```mermaid
flowchart LR
    subgraph Operator["Operator or scheduler"]
        CFG[config.yaml or config.json]
        CLI[CLI flags]
        LOG[console or log file]
    end

    subgraph Source["Source S3-compatible endpoint"]
        SB[(source buckets)]
    end

    subgraph Mirror["s3mirror.py"]
        V[verify connections]
        D[diff keys and sizes]
        T[parallel transfer workers]
    end

    subgraph Destination["Destination S3-compatible endpoint"]
        DB[(destination buckets)]
    end

    CFG --> Mirror
    CLI --> Mirror
    SB --> V
    V --> D
    D --> T
    T --> DB
    Mirror --> LOG
```

Object decisions are based on object key presence and byte size:

```mermaid
flowchart TD
    A[Source object] --> B{Same key exists on destination?}
    B -- No --> C[Copy object]
    B -- Yes --> D{Same byte size?}
    D -- No --> C
    D -- Yes --> E[Skip object]
    F[Destination-only object] --> G{delete_extraneous enabled?}
    G -- Yes --> H[Delete from destination]
    G -- No --> I[Keep on destination]
```

Important detail: this tool currently compares keys and sizes, not object
checksums or metadata. If two objects have the same key and size but different
content, `s3mirror` will treat them as already synchronized.
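The decision rules in the flowchart above can be sketched as a pure function. This is illustrative, not the script's actual internals: the function name is hypothetical, and objects are assumed to be represented as `key -> byte size` mappings.

```python
def classify_objects(source, destination, delete_extraneous=False):
    """Partition objects by the key-and-size rule described above.

    `source` and `destination` map object key -> byte size.
    Returns (to_copy, to_skip, to_delete) as sorted key lists.
    """
    # Copy when the key is missing on the destination or the sizes differ.
    to_copy = sorted(
        key for key, size in source.items()
        if destination.get(key) != size
    )
    # Skip when key and byte size both match (content is NOT compared).
    to_skip = sorted(
        key for key, size in source.items()
        if destination.get(key) == size
    )
    # Destination-only keys are deletion candidates only in mirror mode.
    extraneous = sorted(set(destination) - set(source))
    to_delete = extraneous if delete_extraneous else []
    return to_copy, to_skip, to_delete
```

Note that a same-key, same-size object lands in `to_skip` even when its content differs, which is exactly the caveat stated above.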
---

<a id="prerequisites"></a>
## ✅ Prerequisites

- Python `3.10+`
- Network access to both S3-compatible endpoints
- Source credentials with permission to list buckets and read objects
- Destination credentials with permission to list buckets, create buckets, upload objects, and delete objects if mirror deletion is enabled
- `pip` for installing Python dependencies

The runtime dependencies are listed in [`requirements.txt`](requirements.txt):

- `boto3`
- `urllib3`
- `PyYAML`

---

<a id="quick-start"></a>
## 🚀 Quick Start

Clone the repository and create a virtual environment:

```bash
git clone https://github.com/soakes/s3mirror.git
cd s3mirror
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

Create a configuration file:

```yaml
source:
  endpoint_url: "https://s3.source.example.com"
  aws_access_key_id: "SOURCE_ACCESS_KEY"
  aws_secret_access_key: "SOURCE_SECRET_KEY"
  region_name: "us-east-1"
  verify_ssl: true

destination:
  endpoint_url: "https://s3.destination.example.com"
  aws_access_key_id: "DEST_ACCESS_KEY"
  aws_secret_access_key: "DEST_SECRET_KEY"
  region_name: "us-east-1"
  verify_ssl: true

performance:
  max_workers: 20
  multipart_threshold: 8388608
  multipart_chunksize: 8388608
  max_concurrency: 10
  max_pool_connections: 50

sync:
  delete_extraneous: false
  exclude_buckets: []
```

Run a copy-only validation pass:

```bash
python3 s3mirror.py --config config.yaml --no-delete --debug
```

When the output looks correct, run with the deletion behavior from the config:

```bash
python3 s3mirror.py --config config.yaml --log-file /var/log/s3mirror.log
```
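Loading a config file like the one above takes only a few lines. This is a hedged sketch, not the script's real loader: the function name is illustrative, and the YAML branch assumes `PyYAML` is installed (it is listed in `requirements.txt`).

```python
import json
from pathlib import Path


def load_config(path):
    """Load a YAML or JSON config file into a plain dict (illustrative helper)."""
    path = Path(path)
    text = path.read_text(encoding="utf-8")
    if path.suffix in (".yaml", ".yml"):
        import yaml  # PyYAML, imported lazily so JSON configs work without it
        return yaml.safe_load(text)
    return json.loads(text)
```

Usage would look like `cfg = load_config("config.yaml")`, after which `cfg["sync"]["delete_extraneous"]` holds the deletion setting.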
---

<a id="configuration"></a>
## ⚙️ Configuration

`s3mirror` accepts YAML and JSON configuration files. The top-level sections are:

- `source`: connection settings for the source S3 endpoint
- `destination`: connection settings for the destination S3 endpoint
- `performance`: transfer and HTTP pool tuning
- `sync`: mirror behavior

### Example Configuration

```yaml
source:
  endpoint_url: "https://s3.source.example.com"
  aws_access_key_id: "SOURCE_ACCESS_KEY"
  aws_secret_access_key: "SOURCE_SECRET_KEY"
  region_name: "us-east-1"
  verify_ssl: true

destination:
  endpoint_url: "https://s3.destination.example.com"
  aws_access_key_id: "DEST_ACCESS_KEY"
  aws_secret_access_key: "DEST_SECRET_KEY"
  region_name: "us-east-1"
  verify_ssl: true

performance:
  max_workers: 20
  multipart_threshold: 8388608
  multipart_chunksize: 8388608
  max_concurrency: 10
  max_pool_connections: 50

sync:
  delete_extraneous: true
  exclude_buckets:
    - scratch-bucket
    - temporary-exports
```

### Source and Destination

| Key | Description |
|-----|-------------|
| `endpoint_url` | S3-compatible API endpoint URL. |
| `aws_access_key_id` | Access key for the endpoint. |
| `aws_secret_access_key` | Secret key for the endpoint. |
| `region_name` | Region name passed to `boto3`. Many non-AWS services still expect a value. |
| `verify_ssl` | Enables or disables TLS certificate verification. Use `false` only for trusted self-signed environments. |

### Performance

| Key | Default | Description |
|-----|---------|-------------|
| `max_workers` | `20` | Number of worker threads used for object copy and delete operations. |
| `multipart_threshold` | `8388608` | Object size in bytes where multipart upload behavior starts. |
| `multipart_chunksize` | `8388608` | Multipart chunk size in bytes. |
| `max_concurrency` | `10` | Per-transfer concurrency passed to `boto3` transfer config. |
| `max_pool_connections` | `50` | HTTP connection pool size for each S3 client. |
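The multipart and concurrency keys map naturally onto `boto3`'s real `TransferConfig`. A sketch of how such a mapping could look; the helper names are hypothetical, while `boto3.s3.transfer.TransferConfig` and its keyword arguments are actual `boto3` API. (`max_workers` drives the script's own thread pool, and `max_pool_connections` belongs to botocore's client `Config`, so neither appears here.)

```python
def transfer_settings(perf):
    """Translate the `performance` section into TransferConfig kwargs."""
    mib_8 = 8 * 1024 * 1024  # 8388608, the documented default
    return {
        "multipart_threshold": perf.get("multipart_threshold", mib_8),
        "multipart_chunksize": perf.get("multipart_chunksize", mib_8),
        "max_concurrency": perf.get("max_concurrency", 10),
    }


def make_transfer_config(perf):
    # Imported lazily so transfer_settings stays importable without boto3.
    from boto3.s3.transfer import TransferConfig
    return TransferConfig(**transfer_settings(perf))
```

The resulting object is what `boto3` upload helpers accept via their `Config` parameter.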
### Sync

| Key | Default | Description |
|-----|---------|-------------|
| `delete_extraneous` | `true` | Deletes destination objects that do not exist in the source. |
| `exclude_buckets` | `[]` | Source bucket names to skip entirely. |

`--workers` and `--no-delete` override the loaded configuration for a single
run. Use `--show-config` to inspect the effective configuration with secret keys
redacted.

```bash
python3 s3mirror.py --config config.yaml --show-config
```

---

<a id="usage"></a>
## 🧪 Usage

### Basic Run

```bash
python3 s3mirror.py --config config.yaml
```

### Command-Line Flags

```text
-c, --config FILE
    Configuration file path (.json or .yaml)

-q, --quiet
    Quiet mode. Console shows errors only.

-d, --debug
    Enable verbose debug output.

-l, --log-file FILE
    Write full debug logs to a file. Console stays quiet unless --debug is used.

-w, --workers N
    Override the configured parallel worker count.

--no-delete
    Do not delete destination-only objects, even if delete_extraneous is true.

--show-config
    Display the effective configuration with secret keys redacted and exit.

--version
    Print version information and exit.
```

### Examples

Run with a custom worker count:

```bash
python3 s3mirror.py --config config.yaml --workers 40
```

Run safely without destination deletion:

```bash
python3 s3mirror.py --config config.yaml --no-delete
```

Run with detailed troubleshooting output:

```bash
python3 s3mirror.py --config config.yaml --debug
```

Run from cron with file logging:

```cron
0 2 * * * /path/to/s3mirror/.venv/bin/python /path/to/s3mirror/s3mirror.py --config /path/to/config.yaml --log-file /var/log/s3mirror.log --quiet
```

Run from a systemd timer or service by invoking the same Python command and
using the process exit code for alerting.
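For schedulers that drive alerting from Python rather than shell, a thin wrapper can compose the same invocation and surface the exit code. The helper names here are hypothetical; the flags match the documented CLI, and the paths are placeholders.

```python
import subprocess
import sys


def build_command(config_path, log_path):
    """Compose the scheduled invocation shown in the cron example above."""
    return [
        sys.executable, "s3mirror.py",
        "--config", config_path,
        "--log-file", log_path,
        "--quiet",
    ]


def run_mirror(config_path, log_path):
    """Return the s3mirror exit code: 0 for a clean run, non-zero otherwise."""
    return subprocess.run(build_command(config_path, log_path), check=False).returncode
```

A wrapper like this would call `run_mirror("/etc/s3mirror.yaml", "/var/log/s3mirror.log")` and page or notify when the return value is non-zero.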
---

<a id="operational-behavior"></a>
## 📋 Operational Behavior

| Area | Behavior |
|------|----------|
| Endpoint verification | Calls `ListBuckets` against both source and destination before syncing. |
| Bucket discovery | Mirrors source buckets except names listed in `exclude_buckets`. |
| Bucket creation | Creates missing destination buckets with the same bucket name. |
| Object listing | Uses `list_objects_v2` pagination for source and destination buckets. |
| Object comparison | Copies objects that are missing or whose byte size differs. |
| Transfers | Streams from source with `get_object` and uploads to destination with `upload_fileobj`. |
| Deletes | Deletes destination-only keys only when deletion is enabled. |
| Retries | Uses botocore adaptive retries with `max_attempts` set to `3`. |
| Addressing | Uses S3 path-style addressing. |
| Exit code | Exits `0` when the run completes without counted errors, otherwise exits `1`. |

Statistics printed at the end include buckets processed, buckets created,
objects copied, objects deleted, data transferred, average throughput, and error
count.

---

<a id="logging"></a>
## 📜 Logging

`s3mirror` has logging modes for both humans and schedulers:

| Mode | Console Output | File Output | Typical Use |
|------|----------------|-------------|-------------|
| Normal | Progress and summary | None | Interactive runs |
| Debug | Verbose details with levels | None | Troubleshooting |
| Quiet | Errors only | None | Minimal cron output |
| File log | Errors only unless `--debug` is set | Full debug log with timestamps | Production automation |

Recommended scheduled form:

```bash
python3 s3mirror.py \
  --config /etc/s3mirror.yaml \
  --log-file /var/log/s3mirror.log \
  --quiet
```

When a log file is configured, each run starts with a clear session header so
the file can be tailed or rotated by external tooling.
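The console/file split in the table can be approximated with the standard `logging` module. This is a sketch of one plausible wiring, not the script's actual handler setup; the function name and formatter are illustrative.

```python
import logging


def build_logger(log_file=None, quiet=False, debug=False):
    """Quiet/file modes keep the console to errors; the file gets full debug detail."""
    logger = logging.getLogger("s3mirror")
    logger.setLevel(logging.DEBUG)
    logger.handlers.clear()  # make repeated calls idempotent

    console = logging.StreamHandler()
    if debug:
        console.setLevel(logging.DEBUG)
    elif quiet or log_file:
        console.setLevel(logging.ERROR)
    else:
        console.setLevel(logging.INFO)
    logger.addHandler(console)

    if log_file:
        file_handler = logging.FileHandler(log_file)
        file_handler.setLevel(logging.DEBUG)
        file_handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(message)s")
        )
        logger.addHandler(file_handler)
    return logger
```

With this shape, `--log-file` plus `--quiet` yields an errors-only console and a timestamped debug log, matching the "Production automation" row above.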
---

<a id="safety-notes"></a>
## 🛡️ Safety Notes

`s3mirror` can delete data from the destination. Treat deletion as an operational
choice, not a default assumption.

### Deletion Behavior

When `delete_extraneous: true`, destination objects that are not present in the
source are deleted. This is useful for true mirror workflows, but it can remove
objects that were intentionally written directly to the destination.

Disable deletion for a run:

```bash
python3 s3mirror.py --config config.yaml --no-delete
```

Disable deletion in config:

```yaml
sync:
  delete_extraneous: false
```

### Change Detection

The current implementation compares object key and byte size. It does not
compare checksums, ETags, object metadata, tags, storage class, ACLs, retention
settings, or version history.

That makes the tool fast and simple, but it also means:

- same-key, same-size objects are treated as equal
- metadata-only changes are not mirrored
- versioned bucket history is not replayed
- destination bucket policy and lifecycle settings are not managed

### Recommended Guardrails

- Start with `--no-delete` until the object counts look right.
- Use dedicated credentials with only the permissions needed for the workflow.
- Exclude buckets that are temporary, test-only, or destination-specific.
- Keep independent backups for critical data before enabling deletion.
- Alert on non-zero exit codes and review the log file regularly.
- Test new endpoints or credential changes in a non-production bucket first.
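One extra guardrail worth automating: before flipping `delete_extraneous` on, compute what a mirror pass would remove. The helper below is illustrative (not part of the script) and assumes you already have the key listings in hand.

```python
def would_delete(source_keys, destination_keys):
    """Return destination-only keys, i.e. what delete_extraneous would remove."""
    return sorted(set(destination_keys) - set(source_keys))
```

Reviewing that list per bucket before enabling deletion is a cheap way to catch objects that were written directly to the destination.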
---

<a id="cicd"></a>
## 🤖 CI/CD

GitHub Actions keeps the small codebase checked across supported Python
versions.

### Workflows

- `Lint`
  - runs on pull requests and manual dispatch
  - tests Python `3.10`, `3.11`, `3.12`, and `3.13`
  - installs runtime and lint dependencies
  - runs `pylint`, `black --check`, and `isort --check-only`
- `Auto-format`
  - runs on pushes to `main` and `master`
  - formats `s3mirror.py` with pinned Black and isort versions
  - commits formatting changes back when needed
- `Dependabot`
  - checks Python dependencies weekly
  - opens up to ten dependency update pull requests

### Local Maintainer Commands

```bash
python3 -m pip install -r requirements.txt
python3 -m pip install black isort pylint
black s3mirror.py
isort s3mirror.py
pylint s3mirror.py
```

---

<a id="project-structure"></a>
## 🗂️ Project Structure

```text
s3mirror/
├── .github/
│   ├── dependabot.yml
│   └── workflows/
│       ├── dependabot-auto-merge.yml
│       ├── format.yml
│       └── lint.yml
├── .pylintrc
├── LICENSE
├── README.md
├── requirements.txt
└── s3mirror.py
```

The repository keeps runtime behavior in [`s3mirror.py`](s3mirror.py), dependency
pins in [`requirements.txt`](requirements.txt), and CI policy under
[`.github/`](.github/).

---

<a id="troubleshooting"></a>
## 🩺 Troubleshooting

### Connection Verification Fails

Run with `--debug` and check:

- endpoint URL and scheme
- access key and secret key
- region name required by the provider
- TLS behavior through `verify_ssl`
- firewall, DNS, or proxy access between the runner and both endpoints
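A minimal standalone probe for connection failures can attempt the same `ListBuckets` call the script uses for verification. The function name is illustrative; it accepts any `boto3` S3 client (or anything with a compatible `list_buckets` method), so you can test each endpoint in isolation.

```python
def endpoint_reachable(client):
    """Return (ok, detail) after attempting the ListBuckets verification call."""
    try:
        response = client.list_buckets()
    except Exception as exc:  # botocore raises ClientError / connection errors
        return False, str(exc)
    names = [bucket["Name"] for bucket in response.get("Buckets", [])]
    return True, f"{len(names)} bucket(s) visible"
```

Running this once against the source client and once against the destination client narrows a failure down to one side before you dig into credentials or networking.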
### Destination Buckets Are Not Created

Confirm the destination credential can create buckets. Some providers also
require region-specific bucket creation behavior or pre-created buckets in
restricted accounts.

### Objects Are Not Re-copied

If the key and byte size match, `s3mirror` treats the object as synchronized.
Rename the destination key, delete it, or change the source size if you need to
force a copy with the current implementation.

### Cron Produces Too Much Output

Use `--quiet` with `--log-file`:

```bash
python3 s3mirror.py --config /etc/s3mirror.yaml --log-file /var/log/s3mirror.log --quiet
```

### Self-Signed Endpoint Certificates

Set `verify_ssl: false` only when the endpoint and network are trusted. The
script suppresses `urllib3` insecure certificate warnings so logs stay readable,
but the TLS risk still exists.

---

<a id="contributing"></a>
## 🤝 Contributing

Contributions are welcome. Useful areas include:

- checksum-aware change detection
- metadata, ACL, tag, or storage class mirroring
- richer test coverage with mocked S3 endpoints
- provider-specific compatibility notes
- packaging and deployment examples
- documentation improvements

Before opening a pull request:

1. Create a focused branch for the change.
2. Run Black, isort, and pylint locally.
3. Include enough detail in the pull request for another operator to understand the behavior change.
4. Call out any safety, deletion, or compatibility impact.

---

<a id="license"></a>
## 📄 License

This project is licensed under the [MIT License](LICENSE).

Developed by Simon Oakes.