{"id":46069369,"url":"https://github.com/saworbit/orbit","last_synced_at":"2026-03-01T13:01:20.664Z","repository":{"id":297017208,"uuid":"995377258","full_name":"saworbit/orbit","owner":"saworbit","description":"Resilient file transfer engine with compression, resume, and parallel operations. Built in Rust.","archived":false,"fork":false,"pushed_at":"2026-02-28T11:19:55.000Z","size":3662,"stargazers_count":1,"open_issues_count":24,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-02-28T11:41:39.809Z","etag":null,"topics":["backup","checksum-verification","cli","compression","data-migration","file-transfer","parallel-processing","resume-support","rust","zstd"],"latest_commit_sha":null,"homepage":"https://github.com/saworbit/orbit/blob/main/README.md","language":"Rust","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/saworbit.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null},"funding":{"custom":["https://buymeacoffee.com/sawrion"]}},"created_at":"2025-06-03T11:44:17.000Z","updated_at":"2026-02-28T11:19:59.000Z","dependencies_parsed_at":"2025-06-03T22:28:52.242Z","dependency_job_id":"52d10a33-974d-4ea2-a0e7-9f2fe4f72886","html_url":"https://github.com/saworbit/orbit","commit_stats":null,"previous_names":["saworbit/orbit"],"tags_count":5,"template":false,"template_full_name":null,"purl":"pkg:github/saworbit/orbit","repository_url":"https://repos.ecos
yste.ms/api/v1/hosts/GitHub/repositories/saworbit%2Forbit","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/saworbit%2Forbit/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/saworbit%2Forbit/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/saworbit%2Forbit/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/saworbit","download_url":"https://codeload.github.com/saworbit/orbit/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/saworbit%2Forbit/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29969700,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-01T12:56:10.327Z","status":"ssl_error","status_checked_at":"2026-03-01T12:55:24.744Z","response_time":124,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["backup","checksum-verification","cli","compression","data-migration","file-transfer","parallel-processing","resume-support","rust","zstd"],"created_at":"2026-03-01T13:01:19.634Z","updated_at":"2026-03-01T13:01:20.652Z","avatar_url":"https://github.com/saworbit.png","language":"Rust","readme":"# 🚀 Orbit\n\n\u003e **O**pen **R**esilient **B**ulk **I**nformation **T**ransfer\n\n**The intelligent file transfer tool that never gives 
up** 💪\n\n[![CI](https://github.com/saworbit/orbit/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/saworbit/orbit/actions/workflows/ci.yml)\n[![Security Audit](https://github.com/saworbit/orbit/actions/workflows/compliance.yml/badge.svg)](https://github.com/saworbit/orbit/actions/workflows/compliance.yml)\n[![codecov](https://codecov.io/gh/saworbit/orbit/branch/main/graph/badge.svg)](https://codecov.io/gh/saworbit/orbit)\n[![Release](https://img.shields.io/github/v/release/saworbit/orbit)](https://github.com/saworbit/orbit/releases)\n[![Rust](https://img.shields.io/badge/rust-1.70%2B-blue)](https://www.rust-lang.org)\n[![Docker](https://img.shields.io/badge/docker-ghcr.io-blue)](https://ghcr.io/saworbit/orbit)\n[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n[![GitHub](https://img.shields.io/github/stars/saworbit/orbit?style=social)](https://github.com/saworbit/orbit)\n\n---\n\n## ⚠️ Project Status: Alpha (v0.8.0 Core / v2.2.0-rc.1 Control Plane)\n\n**Orbit is currently in active development and should be considered alpha-quality software.**\n\n- ✅ **Safe for**: Experimentation, evaluation, non-critical workloads, development environments\n- ⚠️ **Use with caution for**: Important data transfers (test thoroughly first, maintain backups)\n- ❌ **Not recommended for**: Mission-critical production systems without extensive testing\n\n**What this means:**\n- APIs may change between versions\n- Some features are experimental and marked as such\n- The V2 architecture (content-defined chunking, semantic replication) is newly introduced\n- **NEW v0.8.0**: Usability \u0026 Automation (Phases 3-5) - Interactive `orbit init` wizard, active environment probing, and intelligent auto-tuning\n- **v0.6.0-alpha.5**: Phase 5 - Sentinel (Autonomous Resilience Engine) - OODA loop monitors Universe V3 and autonomously heals under-replicated chunks via Phase 4 P2P transfers\n- **v0.6.0-alpha.4**: Phase 
4 - Data Plane (P2P Transfer) - Direct Star-to-Star data transfer eliminates Nucleus bandwidth bottleneck, enabling infinite horizontal scaling\n- **v0.6.0-alpha.3**: Phase 3 - Nucleus Client \u0026 RemoteSystem (Client-side connectivity for Nucleus-to-Star orchestration, 99.997% network reduction via compute offloading)\n- **v0.6.0-alpha.2**: Phase 2 - Star Protocol \u0026 Agent (gRPC remote execution server for distributed Orbit Grid)\n- **v0.6.0-alpha.1**: Phase 1 I/O Abstraction Layer - OrbitSystem trait enables future distributed topologies\n- **NEW v2.2.0-rc.1**: Full-stack CI/CD pipeline with dashboard-quality checks, professional file browser, and enhanced developer experience\n- **v2.2.0-beta.1**: Enterprise platform features - Intelligence API (Estimations), Administration (User Management), System Health monitoring\n- **v2.2.0-alpha.2**: React Dashboard implementation with Visual Pipeline Editor, File Browser, and Job Management UI\n- **v2.2.0-alpha.1**: Control Plane architecture with decoupled React dashboard (\"The Separation\")\n- Extensive testing in your specific environment is recommended before production use\n\nSee the [Feature Maturity Matrix](#-feature-maturity-matrix) below for per-feature stability status.\n\n---\n\n## 📑 Table of Contents\n\n- [Project Status](#️-project-status-alpha-v080-core--v220-rc1-control-plane)\n- [What is Orbit?](#-what-is-orbit)\n- [Why Orbit?](#-why-orbit)\n- [Feature Maturity Matrix](#-feature-maturity-matrix)\n- [Key Features](#-key-features)\n  - [Error Handling \u0026 Retries](#-error-handling--retries-never-give-up)\n  - [Disk Guardian](#️-disk-guardian-pre-flight-safety)\n  - [Guidance System](#️-guidance-system-the-flight-computer)\n  - [Manifest System + Starmap](#️-manifest-system--starmap-planner)\n  - [Magnetar State Machine](#-magnetar-persistent-job-state-machine)\n  - [Metadata Preservation](#️-metadata-preservation--transformation)\n  - [Delta Detection](#-delta-detection-efficient-transfers)\n  - [Progress Reporting \u0026 
Operational Controls](#-progress-reporting--operational-controls)\n  - [Inclusion/Exclusion Filters](#-inclusionexclusion-filters-selective-transfers)\n  - [Protocol Support](#-protocol-support)\n  - [Audit \u0026 Telemetry](#-audit-and-telemetry)\n  - [Data Flow Patterns](#-data-flow-patterns)\n- [Quick Start](#-quick-start)\n- [Web GUI](#️-web-gui-new-in-v050)\n- [Performance Benchmarks](#-performance-benchmarks)\n- [Smart Strategy Selection](#-smart-strategy-selection)\n- [Use Cases](#-use-cases)\n- [Configuration](#️-configuration)\n- [Modular Architecture](#-modular-architecture)\n- [Security](#-security)\n- [Documentation](#-documentation)\n- [Roadmap](#-roadmap)\n- [Contributing](#-contributing)\n- [License](#-license)\n\n---\n\n## 🌟 What is Orbit?\n\nOrbit is a file transfer tool built in Rust that aims to combine reliability with performance. Whether you're backing up data, syncing files across locations, transferring to network shares, or moving data to the cloud, Orbit provides features designed to help.\n\n**Key Philosophy:** Intelligence, resilience, and speed. 
Currently in active development (alpha).\n\n---\n\n## ✨ Why Orbit?\n\n| Feature | Benefit |\n|---------|---------|\n| 🚄 **Performance** | Zero-copy system calls for faster transfers (instant APFS cloning on macOS) |\n| 🛡️ **Resilient** | Smart resume with chunk verification, checksums, corruption detection |\n| 🧠 **Adaptive** | Adapts strategy based on environment (zero-copy, compression, buffered) |\n| 🛡️ **Safe** | Disk Guardian prevents mid-transfer failures with pre-flight checks |\n| 🌐 **Protocol Support** | Local, **SSH/SFTP**, SMB/CIFS (experimental), **S3**, **Azure Blob**, **GCS**, with unified backend API |\n| 🌐 **Web Dashboard** | Modern React dashboard with OpenAPI-documented Control Plane (v2.2.0-alpha) |\n| 📊 **Auditable** | Structured JSON telemetry for operations |\n| 🧩 **Modular** | Clean architecture with reusable crates |\n| 🌍 **Cross-Platform** | Linux, macOS, Windows with native optimizations |\n\n---\n\n## 🎯 Feature Maturity Matrix\n\nUnderstanding feature stability helps you make informed decisions about what to use in production.\n\n| Feature | Maturity | Notes |\n|---------|----------|-------|\n| **Core File Copy (Buffered)** | 🟢 Stable | Well-tested, safe for production use |\n| **Zero-Copy Optimization** | 🟢 Stable | Platform-specific (Linux, macOS, Windows) |\n| **OrbitSystem Abstraction (Phase 1)** | 🟢 Stable | I/O abstraction layer, foundation for Grid topology |\n| **Resume/Checkpoint** | 🟡 Beta | Works well, needs more edge-case testing |\n| **Compression (LZ4, Zstd)** | 🟢 Stable | Reliable for most workloads |\n| **Checksum Verification** | 🟢 Stable | BLAKE3 (default), SHA-256 well-tested |\n| **Local Filesystem** | 🟢 Stable | Primary use case, thoroughly tested |\n| **SSH/SFTP Backend** | 🟡 Beta | Functional, needs more real-world testing |\n| **S3 Backend** | 🟡 Beta | Works well, multipart upload is newer |\n| **SMB Backend** | 🟡 Beta | v0.11.0 upgrade complete, ready for integration testing |\n| **Azure Blob Backend** | 🟡 
Beta | Built on the production-grade object_store crate, newly added in v0.6.0 |\n| **GCS Backend** | 🟡 Beta | Built on the production-grade object_store crate, newly added in v0.6.0 |\n| **Delta Detection (V1)** | 🟡 Beta | rsync-style algorithm, tested but newer |\n| **V2 Architecture (CDC)** | 🔴 Alpha | Content-defined chunking, introduced in v0.5.0 |\n| **Semantic Replication** | 🔴 Alpha | Priority-based transfers, introduced in v0.5.0 |\n| **Neutrino Fast Lane** | 🔴 Alpha | Small file optimization (\u003c8KB), introduced in v0.5.0 |\n| **Global Deduplication (V3)** | 🟡 Beta | High-cardinality Universe index, v2.1 scalability upgrade |\n| **Disk Guardian** | 🟡 Beta | Pre-flight checks, works well but newer |\n| **Magnetar State Machine** | 🟡 Beta | Job persistence, recently added |\n| **Resilience Patterns** | 🟡 Beta | Circuit breaker, rate limiting - new features |\n| **Sentinel Resilience Engine (Phase 5)** | 🔴 Alpha | Autonomous OODA loop for chunk redundancy healing |\n| **Filter System** | 🟡 Beta | Glob/regex filters, functional but newer |\n| **Metadata Preservation** | 🟡 Beta | Works well, extended attributes are platform-specific |\n| **Guidance System** | 🟡 Beta | Config validation with active probing (v0.7.0) |\n| **Init Wizard** | 🟡 Beta | Interactive setup with `orbit init` (v0.7.0) |\n| **Active Environment Probing** | 🟡 Beta | Auto-tuning based on hardware/destination (v0.7.0) |\n| **Terminology Abstraction** | 🟢 Stable | User-friendly interface layer (v0.7.0) |\n| **Control Plane API** | 🔴 Alpha | v2.2.0-alpha - OpenAPI/Swagger documented REST API |\n| **React Dashboard** | 🔴 Alpha | v2.2.0-alpha - Modern SPA with React Flow pipelines |\n| **Manifest System** | 🟡 Beta | File tracking and verification |\n| **Progress/Bandwidth Limiting** | 🟡 Beta | Recently integrated across all modes |\n| **Audit Logging** | 🟡 Beta | Structured telemetry, needs more use |\n| **Sparse File Handling** | 🟡 Beta | Zero-chunk detection during CDC, hole-aware writes |\n| **Hardlink 
Preservation** | 🟡 Beta | Inode tracking on Unix/Windows, `--preserve-hardlinks` flag |\n| **In-Place Updates** | 🟡 Beta | Reflink/journaled/unsafe safety tiers, `--inplace` |\n| **Rename Detection** | 🔴 Alpha | Content-aware via Star Map chunk overlap |\n| **Link-Dest++ (Incremental Backup)** | 🔴 Alpha | Chunk-level reference hardlinking, `--link-dest` |\n| **Transfer Journal (Batch Mode)** | 🔴 Alpha | Content-addressed operation journal, `--write-batch` / `--read-batch` |\n| **Backpressure** | 🔴 Alpha | Dual-threshold flow control (object count + byte size) |\n| **Penalization** | 🔴 Alpha | Exponential backoff deprioritization of failed items |\n| **Dead-Letter Queue** | 🔴 Alpha | Bounded quarantine for permanently failed items |\n| **Health Monitor** | 🔴 Alpha | Continuous mid-transfer health checks with advisories |\n| **Ref-Counted GC** | 🔴 Alpha | WAL-gated garbage collection for shared chunks |\n| **Container Packing** | 🔴 Alpha | Chunk packing into `.orbitpak` files to reduce inode pressure |\n| **Typed Provenance** | 🔴 Alpha | Structured event taxonomy for audit lineage queries |\n| **Bulletin Board** | 🔴 Alpha | Centralized error/warning aggregation across Grid nodes |\n| **Composable Prioritizers** | 🔴 Alpha | Chainable sort criteria for transfer scheduling |\n| **Star Lifecycle Hooks** | 🔴 Alpha | Formalized state machine for Star agent lifecycle |\n\n**Legend:**\n- 🟢 **Stable**: Production-ready with extensive testing\n- 🟡 **Beta**: Functional and tested, but needs more real-world validation\n- 🔴 **Alpha**: Experimental, expect changes and potential issues\n\n---\n\n## 🔑 Key Features\n\n### 🔄 Error Handling \u0026 Retries: Never Give Up\n\n**NEW in v0.4.1!** Intelligent error handling with retry logic and comprehensive diagnostics.\n\n**Features:**\n- **Smart Retry Logic** — Exponential backoff with jitter to avoid thundering herd\n- **Error Classification** — Distinguishes transient (retry-worthy) from fatal errors\n- **Flexible Error Modes** — Abort, 
Skip, or Partial (keep incomplete files for resume)\n- **Default Statistics Tracking** — Retry metrics (attempts, successes, failures) are collected and emitted automatically during copy operations\n- **Structured Logging** — Tracing integration for detailed diagnostics\n- **Resilient Sync Verification** — Detects source changes during copy and retries or fails safely\n\n**Default Retry Metrics:**\n\nRetry metrics are now collected and emitted by default for all `copy_file` operations, enhancing observability for data migration, transport, and storage workflows. When retries or failures occur, you'll see output like:\n\n```\n[orbit] Retry metrics: 2 retries, 1 successful, 0 failed, 0 skipped\n```\n\nControl emission with the `ORBIT_STATS` environment variable:\n- `ORBIT_STATS=off` — Disable default emission (for high-volume transfers)\n- `ORBIT_STATS=verbose` — Always emit, even for successful operations with no retries\n\n**Error Modes:**\n- **Abort** (default) — Stop on first error for maximum safety\n- **Skip** — Skip failed files, continue with remaining files\n- **Partial** — Keep partial files and retry, perfect for unstable networks\n\n**Smart Retry Logic:**\n- ⚡ **Permanent errors fail fast** — `PermissionDenied`, `AlreadyExists`, `Compression`, `Decompression` skip retries entirely\n- 🔄 **Transient errors retry** — `TimedOut`, `ConnectionRefused`, `Protocol` use full exponential backoff\n- 🎯 **Intelligent classification** — Allow-list approach ensures only truly transient errors are retried\n\n```bash\n# Resilient transfer with retries and logging\norbit --source /data --dest /backup --recursive \\\n      --retry-attempts 5 \\\n      --exponential-backoff \\\n      --error-mode partial \\\n      --log-level debug \\\n      --log /var/log/orbit.log\n\n# Quick skip mode for batch operations\norbit -s /source -d /dest -R \\\n      --error-mode skip \\\n      --verbose\n\n# Disable stats emission for high-volume batch transfers\nORBIT_STATS=off orbit --source 
/data --dest /backup --recursive\n```\n\n**Programmatic Statistics Tracking:**\n\nFor aggregated metrics across batch operations, pass a custom `OperationStats` instance:\n\n```rust\nuse orbit::{CopyConfig, OperationStats, copy_file_with_stats};\n\n// For aggregated stats across multiple files:\nlet stats = OperationStats::new();\nfor file in \u0026files {\n    copy_file_with_stats(\u0026file.src, \u0026file.dest, \u0026config, Some(\u0026stats))?;\n}\nstats.emit(); // Emit once after all operations\n\n// Get detailed snapshot for programmatic access\nlet snapshot = stats.snapshot();\nprintln!(\"Success rate: {:.1}%\", snapshot.success_rate());\nprintln!(\"Total retries: {}\", snapshot.total_retries);\n```\n\n**Error Categories Tracked:**\n- Validation (path errors)\n- I/O operations\n- Network/protocol issues\n- Resource constraints (disk, memory)\n- Data integrity (checksums)\n- And 11 more categories for comprehensive diagnostics\n\n### 🛡️ Disk Guardian: Pre-Flight Safety\n\n**NEW in v0.4.1!** Comprehensive disk space and filesystem validation to prevent mid-transfer failures.\n\n**Prevents:**\n- ❌ Mid-transfer disk-full errors\n- ❌ OOM conditions from insufficient space\n- ❌ Transfers to read-only filesystems\n- ❌ Permission errors (detected early)\n\n**Features:**\n- **Safety Margins** — 10% extra space by default, fully configurable\n- **Minimum Free Space** — Always leaves 100 MB free (configurable)\n- **Filesystem Integrity** — Write permissions, read-only detection\n- **Staging Areas** — Atomic transfers with temporary staging\n- **Live Monitoring** — Optional filesystem watching (via `notify` crate)\n- **Directory Estimation** — Pre-calculate space needed for directory transfers\n\n```bash\n# Automatic pre-flight checks for directory transfers\norbit --source /data --dest /backup --recursive\n# Output:\n# Performing pre-flight checks...\n# Estimated transfer size: 5368709120 bytes\n# ✓ Sufficient disk space (with safety margin)\n```\n\n**Manual 
API:**\n```rust\nuse orbit::core::disk_guardian::{ensure_transfer_safety, GuardianConfig};\n\nlet config = GuardianConfig {\n    safety_margin_percent: 0.10,      // 10% extra\n    min_free_space: 100 * 1024 * 1024, // 100 MB\n    check_integrity: true,\n    enable_watching: false,\n};\n\nensure_transfer_safety(dest_path, required_bytes, \u0026config)?;\n```\n\n**Try it:**\n```bash\ncargo run --example disk_guardian_demo\n```\n\n📖 **Full Documentation:** See [`docs/DISK_GUARDIAN.md`](docs/DISK_GUARDIAN.md)\n\n---\n\n### 🛰️ Guidance System: The \"Flight Computer\"\n\n**NEW in v0.7.0!** Enhanced with active environment probing and intelligent auto-tuning.\n\nAutomatic configuration validation and optimization that ensures safe, performant transfers, now with active hardware and destination detection.\n\n**What It Does:**\nThe Guidance System acts as an intelligent pre-processor, analyzing your configuration for logical conflicts and automatically resolving them before execution begins. **NEW**: It now actively probes your system environment (CPU, RAM, I/O speed, destination type) and auto-tunes settings for optimal performance.\n\n**Key Benefits:**\n- 🔒 **Safety First** — Prevents data corruption from incompatible flag combinations\n- ⚡ **Performance Optimization** — Automatically selects the fastest valid strategy\n- 🧠 **Active Probing** — Detects hardware, I/O speed, and destination type (v0.7.0)\n- 🎯 **Auto-Tuning** — Optimizes for SMB, cloud storage, slow I/O, low memory (v0.7.0)\n- 🎓 **Educational** — Explains why configurations were changed\n- 🤖 **Automatic** — No manual debugging of conflicting flags\n\n**Example Output:**\n```\n┌── 🛰️  Orbit Guidance System ───────────────────────┐\n│ 🚀 Strategy: Disabling zero-copy to allow streaming checksum verification\n│ 🛡️  Safety: Disabling resume capability to prevent compressed stream corruption\n│ 🔧 Network: Detected SMB destination. 
Enabling resume for reliability.\n│ 🔧 Performance: Detected slow I/O (45.2 MB/s) with 16 cores. Enabling Zstd:3.\n└────────────────────────────────────────────────────┘\n```\n\n**Implemented Rules:**\n\n| Rule | Conflict | Resolution | Icon |\n|------|----------|------------|------|\n| **Hardware** | Zero-copy on unsupported OS | Disable zero-copy | ⚠️ |\n| **Strategy** | Zero-copy + Checksum | Disable zero-copy (streaming is faster) | 🚀 |\n| **Integrity** | Resume + Checksum | Disable checksum (can't verify partial file) | 🛡️ |\n| **Safety** | Resume + Compression | Disable resume (can't append to streams) | 🛡️ |\n| **Precision** | Zero-copy + Resume | Disable zero-copy (need byte-level seeking) | 🚀 |\n| **Visibility** | Manifest + Zero-copy | Disable zero-copy (need content inspection) | 🚀 |\n| **Logic** | Delta + Zero-copy | Disable zero-copy (need patch logic) | 🚀 |\n| **Control** | macOS + Bandwidth + Zero-copy | Disable zero-copy (can't throttle fcopyfile) | ⚠️ |\n| **UX** | Parallel + Progress bars | Info notice (visual artifacts possible) | ℹ️ |\n| **Performance** | Sync + Checksum mode | Info notice (forces dual reads) | ℹ️ |\n| **Physics** | Compression + Encryption | Placeholder (encrypted data won't compress) | 🚀 |\n| **🆕 Network Auto-Tune** | SMB/NFS destination | Enable resume + increase retries | 🔧 |\n| **🆕 CPU/IO Optimization** | ≥8 cores + \u003c50 MB/s I/O | Enable Zstd:3 compression | 🔧 |\n| **🆕 Low Memory** | \u003c1GB RAM + \u003e4 parallel | Reduce to 2 parallel operations | 🔧 |\n| **🆕 Cloud Storage** | S3/Azure/GCS destination | Enable compression + backoff | 🔧 |\n\n**Philosophy:**\n\u003e Users express **intent**. 
Orbit ensures **technical correctness**.\n\nRather than failing with cryptic errors, Orbit understands what you're trying to achieve and automatically adjusts settings to make it work safely and efficiently.\n\n**Programmatic API:**\n```rust\nuse orbit::core::guidance::Guidance;\n\nlet mut config = CopyConfig::default();\nconfig.use_zero_copy = true;\nconfig.verify_checksum = true;\n\n// Run guidance pass\nlet flight_plan = Guidance::plan(config)?;\n\n// Display notices\nfor notice in \u0026flight_plan.notices {\n    println!(\"{}\", notice);\n}\n\n// Use optimized config\ncopy_file(\u0026source, \u0026dest, \u0026flight_plan.config)?;\n```\n\n📖 **Full Documentation:** See [`docs/architecture/GUIDANCE_SYSTEM.md`](docs/architecture/GUIDANCE_SYSTEM.md)\n\n---\n\n### 🗂️ Manifest System + Starmap Planner\n\nOrbit v0.4 introduces a **manifest-based transfer framework** with flight plans, cargo manifests, and verification tools.\n\n#### Current Workflow (v0.4.1)\n```bash\n# 1. Create flight plan (transfer metadata)\norbit manifest plan --source /data --dest /backup --output ./manifests\n\n# 2. Execute transfer with manifest generation\norbit --source /data --dest /backup --recursive \\\n  --generate-manifest --manifest-dir ./manifests\n\n# 3. 
Verify transfer integrity\norbit manifest verify --manifest-dir ./manifests\n```\n\n#### 🔭 Current Starmap Features\n\n- **Flight Plans** — JSON-based transfer metadata and file tracking\n- **Cargo Manifests** — Per-file chunk-level verification\n- **Verification Tools** — Post-transfer integrity checking\n- **Diff Support** — Compare manifests with target directories\n- **Audit Integration** — Full traceability for every operation\n\n#### 🚧 Planned: Declarative Manifests (v0.6.0+)\n\n**Future support for TOML-based job definitions:**\n\n```toml\n# orbit.manifest.toml (PLANNED)\n[defaults]\nchecksum = \"sha256\"\ncompression = \"zstd:6\"\nresume = true\n\n[[job]]\nname = \"source-sync\"\nsource = \"/data/source/\"\ndestination = \"/mnt/backup/source/\"\n\n[[job]]\nname = \"media-archive\"\nsource = \"/media/camera/\"\ndestination = \"/tank/archive/\"\ndepends_on = [\"source-sync\"]  # Dependency ordering\n```\n\n---\n\n### 🧲 Magnetar: Persistent Job State Machine\n\n**NEW in v0.4.1!** A crash-proof, idempotent state machine for managing persistent jobs with dual backend support.\n\n**Prevents:**\n- ❌ Duplicate work after crashes\n- ❌ Lost progress on interruptions\n- ❌ Dependency conflicts in DAG-based workflows\n- ❌ Cascading failures from flaky external services\n\n**Features:**\n- **Atomic Claims** — Idempotent \"pending → processing\" transitions\n- **Crash Recovery** — Resume from any point with chunk-level verification\n- **DAG Dependencies** — Topological sorting for complex job graphs\n- **Dual Backends** — SQLite (default) or redb (pure Rust, WASM-ready)\n- **Zero-Downtime Migration** — Swap backends without stopping jobs\n- **Analytics Ready** — Export to Parquet for analysis\n- **Resilience Module** — Circuit breaker, connection pooling, and rate limiting for fault-tolerant data access ⭐ **NEW!**\n\n```rust\nuse magnetar::JobStatus;\n\n#[tokio::main]\nasync fn main() -\u003e anyhow::Result\u003c()\u003e {\n    let mut store = 
magnetar::open(\"jobs.db\").await?;\n\n    // Load chunks from manifest\n    let manifest = toml::from_str(r#\"\n        [[chunks]]\n        id = 1\n        checksum = \"abc123\"\n    \"#)?;\n\n    store.init_from_manifest(42, \u0026manifest).await?;\n\n    // Process with automatic deduplication\n    while let Some(chunk) = store.claim_pending(42).await? {\n        // Do work... (if crash happens, chunk auto-reverts to pending)\n        store.mark_status(42, chunk.chunk, JobStatus::Done, None).await?;\n    }\n\n    Ok(())\n}\n```\n\n**Try it:**\n```bash\ncd crates/magnetar\ncargo run --example basic_usage\ncargo run --example crash_recovery  # Simulates crash and resume\ncargo run --example resilience_demo --features resilience  # Circuit breaker demo\n```\n\n#### 🛡️ Resilience Module\n\n**NEW in v0.4.1!** Built-in resilience patterns for fault-tolerant access to flaky external services like S3, SMB, and databases.\n\n**Components:**\n- **Circuit Breaker** — Fail-fast protection with automatic recovery\n- **Connection Pool** — Efficient connection reuse with health checking\n- **Rate Limiter** — Token bucket rate limiting to prevent service overload\n\n```rust\nuse magnetar::resilience::prelude::*;\nuse std::sync::Arc;\n\n// Setup resilience stack\nlet breaker = CircuitBreaker::new_default();\nlet pool = Arc::new(ConnectionPool::new_default(factory));\nlet limiter = RateLimiter::per_second(100);\n\n// Execute with full protection\nbreaker.execute(|| {\n    let pool = pool.clone();\n    let limiter = limiter.clone();\n    async move {\n        limiter.execute(|| async {\n            let conn = pool.acquire().await?;\n            let result = perform_s3_operation(\u0026conn).await;\n            pool.release(conn).await;\n            result\n        }).await\n    }\n}).await?;\n```\n\n**Resilience Features:**\n- ✅ Three-state circuit breaker (Closed → Open → HalfOpen)\n- ✅ Exponential backoff with configurable retries\n- ✅ Generic connection pool with health 
checks\n- ✅ Pending-creation tracking prevents idle over-provisioning under concurrency\n- ✅ Pool statistics and monitoring\n- ✅ Idle timeout and max lifetime management\n- ✅ Rate limiting with token bucket algorithm\n- ✅ Optional governor crate integration\n- ✅ Thread-safe async/await support\n- ✅ Transient vs permanent error classification\n- ✅ S3 and SMB integration examples\n\n📖 **Full Documentation:** See [`crates/magnetar/README.md`](crates/magnetar/README.md) and [`crates/magnetar/src/resilience/README.md`](crates/magnetar/src/resilience/README.md)\n\n---\n\n### 🔄 Data Flow Patterns\n\n**10 production-grade modules** for reliable, observable data transfer pipelines — implemented across 6 crates with 180+ tests.\n\n```text\n┌─────────────┐     ┌──────────────┐     ┌─────────────┐\n│  Ingest      │────\u003e│  Transfer    │────\u003e│  Delivery   │\n│              │     │  Pipeline    │     │             │\n│ Prioritizer  │     │ Backpressure │     │ Container   │\n│ Provenance   │     │ Penalization │     │ Packing     │\n│ Lifecycle    │     │ Health Mon.  
│     │ Bulletin    │\n└─────────────┘     │ Dead-Letter  │     │ Board       │\n                    │ Ref-Count GC │     └─────────────┘\n                    └──────────────┘\n```\n\n**Flow Control \u0026 Resilience** (`core-resilience`):\n- **Backpressure** — Dual-threshold guards (object count + byte size) with apply/release semantics\n- **Penalization** — Exponential backoff deprioritization with configurable cap and decay\n- **Dead-Letter Queue** — Bounded quarantine with reason tracking, retry, and drain support\n- **Health Monitor** — Continuous mid-transfer health checks emitting typed advisories\n- **Ref-Counted GC** — WAL-gated garbage collection preventing premature deletion of shared chunks\n\n**Scheduling \u0026 Intelligence** (`core-semantic`):\n- **Composable Prioritizers** — Chainable sort criteria (size, age, priority, name) with `then()` composition\n\n**Observability \u0026 Lifecycle**:\n- **Typed Provenance** (`core-audit`) — Structured event taxonomy for lineage queries\n- **Bulletin Board** (`orbit-connect`) — Lock-free ring buffer for error/warning aggregation across Grid nodes\n- **Star Lifecycle Hooks** (`orbit-star`) — Formalized state machine (Registered → Scheduled → Draining → Shutdown)\n- **Container Packing** (`core-starmap`) — Chunk packing into `.orbitpak` files to reduce inode pressure\n\n```rust\nuse orbit_core_resilience::{Backpressure, BackpressureConfig};\n\n// Apply backpressure before sending chunks\nlet bp = Backpressure::new(BackpressureConfig {\n    max_objects: 1000,\n    max_bytes: 64 * 1024 * 1024, // 64 MB\n    ..Default::default()\n});\n\nif bp.try_acquire(1, chunk_size).is_ok() {\n    send_chunk(chunk).await?;\n    bp.release(1, chunk_size);\n}\n```\n\n📖 **Full Documentation:** See [`docs/architecture/DATA_FLOW_PATTERNS.md`](docs/architecture/DATA_FLOW_PATTERNS.md)\n\n---\n\n### 🏷️ Metadata Preservation \u0026 Transformation\n\n**NEW in v0.4.1!** Comprehensive file metadata preservation with transformation 
capabilities for cross-platform transfers and reproducible builds.\n\n**Default Metadata Support:**\n- **Timestamps** — Access time (atime), modification time (mtime), creation time (ctime)\n- **Permissions** — Unix mode bits, Windows file attributes\n\n**Extended Metadata Support** (requires `extended-metadata` feature):\n- **Ownership** — User ID (UID) and Group ID (GID) on Unix systems\n- **Extended Attributes (xattrs)** — User-defined metadata on supported filesystems\n\nTo enable extended metadata preservation:\n```toml\n[dependencies]\norbit = { version = \"0.6.0\", features = [\"extended-metadata\"] }\n```\n\n\u003e **Note:** Extended attributes have platform limitations (e.g., partial or no support on Windows, requires compatible filesystem on Unix). Ownership preservation typically requires root/administrator privileges.\n\n**Features:**\n- **Selective Preservation** — Choose exactly what to preserve: `times,perms,owners,xattrs`\n- **Path Transformations** — Regex-based renaming with sed-like syntax: `s/old/new/`\n- **Case Conversion** — Lowercase, uppercase, or titlecase filename normalization\n- **Metadata Filtering** — Strip ownership, permissions, or xattrs for privacy/portability\n- **Cross-Platform** — Graceful fallbacks on unsupported platforms\n- **Backend Integration** — Works with local, SSH, S3 (extensible)\n- **Strict Mode** — Configurable error handling (warn vs. 
fail)\n- **Verification** — Post-transfer metadata validation\n\n**Use Cases:**\n- ✅ Cross-platform migrations (Unix → Windows, macOS → Linux)\n- ✅ Reproducible builds (normalize timestamps, strip metadata)\n- ✅ Privacy-aware backups (strip ownership information)\n- ✅ Cloud storage with metadata (preserve via manifest integration)\n- ✅ Archival compliance (preserve extended attributes, ACLs)\n\n```bash\n# Basic metadata preservation\norbit --source /data --dest /backup --recursive --preserve-metadata\n\n# Selective preservation with detailed flags\norbit --source /data --dest /backup \\\n  --preserve=times,perms,owners,xattrs \\\n  --verify-metadata\n\n# With path transformations\norbit --source /photos --dest /archive \\\n  --preserve=all \\\n  --transform=\"rename:s/IMG_/photo_/,case:lower\"\n\n# Strip sensitive metadata for cloud\norbit --source /data --dest s3://bucket/data \\\n  --preserve=times,perms \\\n  --transform=\"strip:ownership,strip:xattrs\"\n\n# Strict mode (fail on any metadata error)\norbit --source /critical --dest /backup \\\n  --preserve=all \\\n  --strict-metadata\n```\n\n**Preservation Flags:**\n- `times` — Access and modification timestamps (default)\n- `perms` — Unix permissions (mode bits) (default)\n- `owners` — User and group ownership (UID/GID) (requires privileges)\n- `xattrs` — Extended attributes (requires `extended-metadata` feature, Unix-like systems only)\n- `all` — Preserve everything (full support requires `extended-metadata` feature)\n\n**Transformation Options:**\n- `rename:s/pattern/replacement/` — Regex-based path renaming (sed-like syntax)\n- `case:lower|upper|title` — Filename case conversion\n- `strip:xattrs|ownership|permissions` — Remove metadata\n- `normalize:timestamps` — Set all timestamps to epoch (reproducible builds)\n\n📖 **API Documentation:** See `src/core/file_metadata.rs`, `src/core/transform.rs`, and `src/core/metadata_ops.rs`\n\n---\n\n### 🔄 Delta Detection: Efficient Transfers\n\n**NEW in v0.4.1!** rsync-inspired delta 
algorithm that minimizes bandwidth by transferring only changed blocks.\n\n**Orbit V2 Architecture** 🚀\n\n**UPGRADED in v2.1: Universe Scalability** 🌌\n- **High-Cardinality Performance** — Eliminated O(N²) write amplification bottleneck in Universe index\n  - **Multimap Architecture**: Uses `redb::MultimapTableDefinition` for discrete location entries\n  - **O(log N) Inserts**: Logarithmic insert cost regardless of duplicate count (was O(N) in V2)\n  - **Streaming Iteration**: O(1) memory usage via `scan_chunk()` callback API\n  - **Production Scale**: Handles billions of chunks with millions of duplicates per chunk\n  - **Benchmark**: 20,000 duplicates - last batch 0.55x faster than first (V2 would be ~200x slower)\n  - **See:** [SCALABILITY_SPEC.md](docs/architecture/SCALABILITY_SPEC.md) for technical details\n\n- **Content-Defined Chunking (CDC)** — Gear Hash CDC solves the \"shift problem\" with 99.1% chunk preservation\n- **Semantic Prioritization** — Intelligent file classification with 4-tier priority system for optimized disaster recovery\n  - **Critical(0)**: Configs (.toml, .json, .yaml, .lock) → AtomicReplace strategy\n  - **High(10)**: WAL files (pg_wal/*, *.wal, *.binlog) → AppendOnly strategy\n  - **Normal(50)**: Source code, documents → ContentDefined strategy\n  - **Low(100)**: Media, archives, disk images (.iso, .zip, .mp4) → ContentDefined strategy\n  - **Extensible**: Custom adapters via `SemanticAdapter` trait\n- **Global Deduplication** — Identical chunks stored once, regardless of file location\n- **Universe Map** — Repository-wide content-addressed index for cross-file deduplication\n- **100% Rename Detection** — Renaming a file results in 0 bytes transferred\n- **Smart Sync Mode** — Priority-ordered transfers using BinaryHeap for semantic-aware replication\n  - Automatically detects when `check_mode_str = \"smart\"` is configured\n  - Four-phase algorithm: Scan → Analyze → Queue → Execute in priority order\n  - Ensures critical files 
(configs) are transferred before low-priority files (backups, media)\n  - ~60% faster disaster recovery via semantic prioritization\n- **Persistent Universe** — ACID-compliant embedded database for chunk index persistence (Stage 4)\n  - Uses redb for zero-copy, memory-mapped storage with full ACID guarantees\n  - Data survives application restarts (verified with drop \u0026 re-open tests)\n  - ChunkLocation tracking: Full path + offset + length for precise deduplication\n  - Atomic chunk claims (`try_claim_chunk`) avoid duplicate transfers under concurrency\n  - 4/4 persistence tests passing\n- **See:** [ORBIT_V2_ARCHITECTURE.md](ORBIT_V2_ARCHITECTURE.md) for complete details\n\n**Neutrino Fast Lane** ⚡\n\nThe **Neutrino Fast Lane** provides ~3x performance improvement for small-file workloads by bypassing CDC/deduplication overhead:\n\n- **Smart Routing** — Files \u003c8KB automatically routed to high-concurrency direct transfer\n- **High Concurrency** — 100-500 concurrent async tasks (vs standard 16)\n- **Zero Overhead** — Bypasses BLAKE3 hashing, CDC chunking, and starmap indexing\n- **Reduced CPU Load** — Direct I/O without rolling hash computation\n- **Configurable Threshold** — Adjustable size threshold (default: 8KB)\n- **Seamless Integration** — Works with Smart Sync priority-based transfers\n\n**Performance:**\n- 10,000 files (1-4KB): ~15s vs ~45s (standard) = **3x faster**\n- 60% lower CPU usage for small-file workloads\n- Minimal database bloat (no index entries for small files)\n\n**Usage:**\n```bash\n# Enable Neutrino fast lane\norbit --source /source --dest /dest --profile neutrino --recursive\n\n# Custom threshold (16KB)\norbit --source /source --dest /dest --profile neutrino --neutrino-threshold 16 --recursive\n\n# Combined with Checksum Sync\norbit --source /source --dest /dest --check checksum --profile neutrino --recursive\n```\n\n**Best For:**\n- Source code repositories (`node_modules`, `.git` directories)\n- Configuration directories (`/etc`, 
`.config`)\n- Log files and small assets\n- npm/pip package directories\n\n**Requirements:** Requires `backend-abstraction` feature (included with network backends)\n\n**See:** [PERFORMANCE.md](docs/guides/PERFORMANCE.md#neutrino-fast-lane-v05) for detailed documentation\n\n**V2 CDC Features:**\n- **Gear Hash Rolling Hash** — 256-entry lookup table for fast boundary detection (~2GB/s per core)\n- **Shift-Resilient** — Inserting 1 byte preserves 99.1% of chunks (vs 0% with fixed-size blocks)\n- **Variable Chunks** — 8KB min, 64KB avg, 256KB max (configurable)\n- **BLAKE3 Hashing** — Cryptographically secure content identification\n- **Iterator-Based API** — Memory-efficient streaming with `ChunkStream\u003cR: Read\u003e`\n- **Threshold-Based Cuts** — Robust chunking across different data patterns\n\n**Features:**\n- **4 Detection Modes** — ModTime (fast), Size, Checksum (BLAKE3), Delta (block-based)\n- **Rolling Checksum** — Gear64 (default, 64-bit) or Adler-32 (legacy, 32-bit)\n- **Slice \u0026 Emit Buffering** — Non-matching spans flush as slices (no per-byte allocations) for much faster 0% similarity workloads\n- **Parallel Hashing** — Rayon-based concurrent block processing\n- **Smart Fallback** — Automatic full copy for incompatible files\n- **80-99% Savings** — For files with minor changes\n- **Configurable Blocks** — 64KB to 4MB block sizes\n- **Resume Handling** — Partial manifest support for interrupted transfers (NEW!)\n\n**Use Cases:**\n- ✅ Daily database backups (90-95% savings)\n- ✅ VM image updates (85-95% savings)\n- ✅ Large file synchronization over slow links\n- ✅ Log file rotation (95-99% savings for append-only)\n- ✅ Fault-tolerant transfers over unreliable networks (NEW!)\n\n```bash\n# Basic delta transfer\norbit --source bigfile.iso --dest bigfile.iso --check delta\n\n# Recursive sync with custom block size\norbit --source /data --dest /backup --recursive \\\n  --check delta --block-size 512\n\n# With resume for large files\norbit --source 
vm.qcow2 --dest backup/vm.qcow2 \\\n  --check delta --resume --block-size 2048\n```\n\n**Delta Resume Handling (NEW!):**\n\nDelta transfers now support resume capability via partial manifests for fault-tolerant operations. On failure, a `{dest}.delta.partial.json` manifest is saved; subsequent calls will resume if possible.\n\n```rust\nuse orbit::{CopyConfig, copy_file};\nuse orbit::core::delta::CheckMode;\n\nlet mut config = CopyConfig::default();\nconfig.check_mode = CheckMode::Delta;\nconfig.delta_resume_enabled = true;  // Enabled by default\nconfig.delta_chunk_size = 1024 * 1024;  // 1MB chunks\n\n// Attempts delta with resume; falls back on non-resumable errors\ncopy_file(\u0026src, \u0026dest, \u0026config)?;\n```\n\nFor large data migrations, enable retries at higher levels to leverage resumes. Disable resume with `config.delta_resume_enabled = false` if not needed.\n\n**Manifest Generation (NEW!):**\n\nWhen `update_manifest` is enabled and a `manifest_path` is provided, Orbit will emit or update a manifest database post-transfer, tracking file metadata and checksums. 
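As a miniature illustration of this update-or-skip behavior, the sketch below uses a hypothetical `MiniManifest`/`Entry` pair, not Orbit's actual `ManifestDb` API (which persists JSON to disk and tracks richer per-file data):

```rust
use std::collections::HashMap;

// Hypothetical, simplified stand-in for a manifest store.
#[derive(Debug, Clone, PartialEq)]
pub struct Entry {
    pub size: u64,
    pub checksum: String,
    pub delta_used: bool,
}

#[derive(Default)]
pub struct MiniManifest {
    pub entries: HashMap<String, Entry>,
}

impl MiniManifest {
    /// Record a completed transfer. With `ignore_existing = true`, an entry
    /// that is already present is left untouched (skip semantics); otherwise
    /// it is refreshed. Returns true when the manifest was actually updated.
    pub fn record(&mut self, path: &str, entry: Entry, ignore_existing: bool) -> bool {
        if ignore_existing && self.entries.contains_key(path) {
            return false;
        }
        self.entries.insert(path.to_string(), entry);
        true
    }
}
```

The same decision drives `stats.manifest_updated` in the real API: a skipped entry means no update was written.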
Use `ignore_existing` to skip updates if the manifest already exists.\n\n```rust\nuse orbit::core::delta::{DeltaConfig, copy_with_delta_fallback, ManifestDb};\nuse std::path::PathBuf;\n\nlet mut config = DeltaConfig::default();\nconfig.update_manifest = true;\nconfig.manifest_path = Some(PathBuf::from(\"transfer_manifest.json\"));\nconfig.ignore_existing = false;  // Update existing manifest (default)\n\n// Delta transfer with automatic manifest update\nlet (stats, checksum) = copy_with_delta_fallback(\u0026src, \u0026dest, \u0026config)?;\n\nif stats.manifest_updated {\n    println!(\"Manifest updated with checksum: {:?}\", checksum);\n}\n\n// Load manifest for custom analytics or auditing\nlet manifest = ManifestDb::load(\u0026PathBuf::from(\"transfer_manifest.json\"))?;\nfor (path, entry) in manifest.iter() {\n    println!(\"{}: {} bytes, delta_used={}\", path.display(), entry.size, entry.delta_used);\n}\n```\n\n**Manifest Features:**\n- **Automatic Updates** — Manifests are updated after successful delta or fallback transfers\n- **Entry Tracking** — Each file entry includes source path, destination path, checksum, size, modification time, and delta statistics\n- **JSON Format** — Human-readable and machine-parseable manifest format\n- **Validation** — `config.validate_manifest()` ensures proper configuration before transfer\n\n**Performance:**\n- 1GB file with 5% changes: **10x faster** (3s vs 30s), **95% less data** (50MB vs 1GB)\n- Identical files: **99% savings** with minimal CPU overhead\n\n📖 **Full Documentation:** See [`docs/DELTA_DETECTION_GUIDE.md`](docs/DELTA_DETECTION_GUIDE.md) and [`docs/DELTA_QUICKSTART.md`](docs/DELTA_QUICKSTART.md)\n\n---\n\n### 📊 Progress Reporting \u0026 Operational Controls\n\n**NEW in v0.4.1!** Production-grade progress tracking, simulation mode, bandwidth management, and concurrency control for enterprise workflows.\n\n**Features:**\n- **Enhanced Progress Bars** — Multi-transfer tracking with `indicatif`, real-time ETA and 
speed\n- **Dry-Run Mode** — Safe simulation and planning before actual transfers\n- **Bandwidth Limiting** — Token bucket rate limiting (`governor`) **fully integrated** across all copy modes ⭐\n- **Concurrency Control** — Semaphore-based parallel operation management **fully integrated** ⭐\n- **Verbosity Levels** — Detailed logging with structured tracing\n- **Multi-Transfer Support** — Concurrent progress bars for parallel operations\n- **Zero New Dependencies** — Leveraged existing infrastructure\n\n**What's New:**\n- ✅ **BandwidthLimiter** now integrated into buffered, LZ4, Zstd, and zero-copy operations\n- ✅ **ConcurrencyLimiter** now integrated into directory copy with RAII permits\n- ✅ **Zero-copy** now supports bandwidth limiting (Linux/macOS with 1MB chunks)\n- ✅ **Throttle logging** for monitoring rate limit events (debug level)\n- ✅ **Load tests** verify accuracy of rate limiting and concurrency control\n\n**Use Cases:**\n- ✅ Preview large migrations before executing (dry-run)\n- ✅ **Limit bandwidth to avoid network saturation or cloud costs**\n- ✅ **Control resource usage with fine-grained concurrency limits**\n- ✅ Monitor complex parallel transfers with real-time progress\n- ✅ Test filter rules and transformations safely\n\n```bash\n# Preview transfer with dry-run\norbit -s /data -d /backup -R --dry-run --verbose\n# Output:\n# [DRY-RUN] Would copy: /data/file1.txt -\u003e /backup/file1.txt (1024 bytes)\n# [DRY-RUN] Would skip: /data/file2.txt - already exists\n#\n# Dry-Run Summary:\n#   Files to copy:    5\n#   Files to skip:    2\n#   Total data size:  10.5 MB\n\n# Limit bandwidth to 10 MB/s with 4 concurrent transfers\norbit -s /large/dataset -d /backup \\\n  --recursive \\\n  --max-bandwidth 10 \\\n  --parallel 4 \\\n  --show-progress\n\n# Bandwidth limiting now works with zero-copy!\norbit -s /large/file.bin -d /backup/file.bin --max-bandwidth 10\n\n# Auto-detect optimal concurrency (2x CPU cores, capped at 16)\n# Note: If CPU detection fails 
(restricted containers/cgroups), defaults to 1 thread with warning\norbit -s /data -d /backup -R --parallel 0\n\n# Full-featured production transfer\norbit -s /production/data -d /backup/location \\\n  --recursive \\\n  --max-bandwidth 10 \\\n  --parallel 8 \\\n  --show-progress \\\n  --resume \\\n  --retry-attempts 5 \\\n  --exponential-backoff \\\n  --verbose\n```\n\n**Progress Features:**\n- Real-time transfer speed (MB/s)\n- Accurate ETA calculations\n- Per-file progress tracking\n- Support for concurrent transfers\n- Terminal-friendly progress bars\n\n**Bandwidth Limiting:**\n- Token bucket algorithm for smooth throttling (`governor` crate)\n- Configurable MB/s limits via `--max-bandwidth`\n- Zero overhead when disabled (0 = unlimited)\n- **Integrated across ALL copy modes**: buffered, LZ4, Zstd, zero-copy (Linux/macOS)\n- Thread-safe and cloneable\n- Throttle event logging (debug level)\n- 1MB chunks for precise control in zero-copy mode\n\n**Concurrency Control:**\n- Auto-detection based on CPU cores (2× CPU count, max 16)\n- Safe fallback: Defaults to 1 thread if CPU detection fails (restricted environments)\n- Configurable maximum parallel operations via `--parallel`\n- **Integrated into directory copy** with per-file permit acquisition\n- RAII-based permit management (automatic cleanup via Drop)\n- Optimal for I/O-bound operations\n- See [Performance Guide](docs/guides/PERFORMANCE.md) for detailed concurrency tuning\n- Works seamlessly with rayon thread pools\n\n**Dry-Run Capabilities:**\n- Simulate all operations (copy, update, skip, delete, mkdir)\n- Detailed logging via tracing framework\n- Summary statistics with total data size\n- Works with all other features (filters, transformations, etc.)\n\n**Technical Details:**\n- **Implementation**: Integrated existing `BandwidthLimiter` and `ConcurrencyLimiter` classes\n- **Testing**: 177 tests passed, 3 timing-sensitive load tests available with `--ignored`\n- **Monitoring**: Structured logging via 
`tracing` with debug-level throttle events\n- **Compatibility**: Zero impact on existing functionality, all tests passing\n\n📖 **Full Documentation:** See [`docs/PROGRESS_AND_CONCURRENCY.md`](docs/PROGRESS_AND_CONCURRENCY.md) ⭐ **NEW!**\n\n---\n\n### 🎯 Inclusion/Exclusion Filters: Selective Transfers\n\n**NEW in v0.4.1!** Powerful rsync/rclone-inspired filter system for selective file processing with glob patterns, regex, and exact path matching.\n\n**Features:**\n- **Multiple Pattern Types** — Glob (`*.txt`, `target/**`), Regex (`^src/.*\\.rs$`), Exact paths\n- **Include/Exclude Rules** — Both supported with first-match-wins semantics\n- **Filter Files** — Load reusable filter rules from `.orbitfilter` files\n- **Early Directory Pruning** — Skip entire directory trees efficiently\n- **Cross-Platform** — Consistent path matching across Windows, macOS, Linux\n- **Dry-Run Visibility** — See what would be filtered before actual transfer\n- **Negation Support** — Invert filter actions with `!` prefix\n\n**Use Cases:**\n- ✅ Selective backups (exclude build artifacts, logs, temp files)\n- ✅ Source code transfers (include only source files, exclude dependencies)\n- ✅ Clean migrations (exclude platform-specific files)\n- ✅ Compliance-aware transfers (exclude sensitive files by pattern)\n\n```bash\n# Basic exclude patterns\norbit -s /project -d /backup -R \\\n  --exclude=\"*.tmp\" \\\n  --exclude=\"target/**\" \\\n  --exclude=\"node_modules/**\"\n\n# Include overrides exclude (higher priority)\norbit -s /logs -d /archive -R \\\n  --include=\"important.log\" \\\n  --exclude=\"*.log\"\n\n# Use regex for complex patterns\norbit -s /code -d /backup -R \\\n  --exclude=\"regex:^tests/.*_test\\.rs$\" \\\n  --include=\"**/*.rs\"\n\n# Load filters from file\norbit -s /data -d /backup -R --filter-from=backup.orbitfilter\n\n# Combine with other features\norbit -s /source -d /dest -R \\\n  --include=\"*.rs\" \\\n  --exclude=\"target/**\" \\\n  --check delta \\\n  --compress zstd:3 \\\n 
 --dry-run\n```\n\n**Filter File Example (`backup.orbitfilter`):**\n```text\n# Include source files (higher priority - checked first)\n+ **/*.rs\n+ **/*.toml\n+ **/*.md\n\n# Exclude build artifacts\n- target/**\n- build/**\n- *.o\n\n# Exclude logs and temp files\n- *.log\n- *.tmp\n\n# Regex for test files\n- regex: ^tests/.*_test\\.rs$\n\n# Exact path inclusion\ninclude path: Cargo.lock\n```\n\n**Pattern Priority:**\n1. Include patterns from `--include` (highest)\n2. Exclude patterns from `--exclude`\n3. Rules from filter file (in file order)\n4. Default: Include (if no rules match)\n\n**Example Filter File:** [`examples/filters/example.orbitfilter`](examples/filters/example.orbitfilter)\n\n📖 **Full Documentation:** See [`docs/FILTER_SYSTEM.md`](docs/FILTER_SYSTEM.md)\n\n---\n\n### 🌐 Protocol Support\n\nOrbit supports multiple storage backends through a **unified backend abstraction layer** that provides a consistent async API across all storage types.\n\n| Protocol | Status | Feature Flag | Description |\n|----------|--------|--------------|-------------|\n| 🗂️ **Local** | 🟢 Stable | Built-in | Local filesystem with zero-copy optimization |\n| 🔐 **SSH/SFTP** | 🟡 Beta | `ssh-backend` | Remote filesystem access via SSH/SFTP with async I/O |\n| ☁️ **S3** | 🟡 Beta | `s3-native` | Amazon S3 and compatible object storage (MinIO, LocalStack) |\n| 🌐 **SMB/CIFS** | 🟡 Beta | `smb-native` | Native SMB2/3 client (pure Rust, v0.11.1, ready for testing) |\n| ☁️ **Azure Blob** | 🟡 Beta | `azure-native` | Microsoft Azure Blob Storage (using object_store crate) |\n| ☁️ **GCS** | 🟡 Beta | `gcs-native` | Google Cloud Storage (using object_store crate) |\n| 🌐 **WebDAV** | 🚧 Planned | - | WebDAV protocol support |\n\n#### 🆕 Unified Backend Abstraction (v0.5.0+ - Streaming API)\n\n**NEW!** Write once, run on any storage backend. 
The backend abstraction provides a consistent async API with **streaming I/O** for memory-efficient large file transfers:\n\n```rust\nuse orbit::backend::{Backend, ListOptions, LocalBackend, SshBackend, S3Backend, SmbBackend, AzureBackend, GcsBackend, SmbConfig, AzureConfig, GcsConfig};\nuse std::path::Path;\nuse tokio::fs::File;\nuse tokio::io::AsyncRead;\nuse futures::StreamExt;\n\n// All backends implement the same trait with streaming support\nasync fn copy_file\u003cB: Backend\u003e(backend: \u0026B, src: \u0026Path, dest: \u0026Path) -\u003e Result\u003c()\u003e {\n    // Stream file directly from disk - no memory buffering!\n    let file = File::open(src).await?;\n    let metadata = file.metadata().await?;\n    let reader: Box\u003cdyn AsyncRead + Unpin + Send\u003e = Box::new(file);\n\n    backend.write(dest, reader, Some(metadata.len()), Default::default()).await?;\n    Ok(())\n}\n\n// List directories with streaming (constant memory for millions of entries)\nasync fn list_large_directory\u003cB: Backend\u003e(backend: \u0026B, path: \u0026Path) -\u003e Result\u003c()\u003e {\n    let mut stream = backend.list(path, ListOptions::recursive()).await?;\n\n    while let Some(entry) = stream.next().await {\n        let entry = entry?;\n        println!(\"{}: {} bytes\", entry.path.display(), entry.metadata.size);\n    }\n    Ok(())\n}\n\n// Works with any backend\nlet local = LocalBackend::new();\nlet ssh = SshBackend::connect(config).await?;\nlet s3 = S3Backend::new(s3_config).await?;\nlet smb = SmbBackend::new(SmbConfig::new(\"server\", \"share\")\n    .with_username(\"user\")\n    .with_password(\"pass\")).await?;\nlet azure = AzureBackend::new(\"my-container\").await?;\nlet gcs = GcsBackend::new(\"my-bucket\").await?;\n```\n\n**Features:**\n- ✅ **URI-based configuration**: `ssh://user@host/path`, `s3://bucket/key`, `smb://user@server/share/path`, `azblob://container/path`, `gs://bucket/path`, etc.\n- ✅ **Streaming I/O**: Upload files up to **5TB** to S3 with ~200MB RAM\n- ✅ **Constant Memory 
Listing**: List millions of S3 objects with ~10MB RAM\n- ✅ **Automatic Multipart Upload**: S3 files ≥5MB use efficient chunked transfers\n- ✅ **Optimized Download**: Sliding window concurrency for 30-50% faster S3 downloads\n- ✅ **Metadata operations**: Set permissions, timestamps, xattrs, ownership\n- ✅ **Extensibility**: Plugin system for custom backends\n- ✅ **Type-safe**: Strong typing with comprehensive error handling\n- ✅ **Security**: Built-in secure credential handling\n\n📖 **Full Guide:** [docs/guides/BACKEND_GUIDE.md](docs/guides/BACKEND_GUIDE.md)\n📖 **Migration Guide:** [docs/guides/BACKEND_STREAMING_GUIDE.md](docs/guides/BACKEND_STREAMING_GUIDE.md) ⭐ **NEW!**\n\n#### SSH/SFTP Remote Access\n\nTransfer files securely over SSH/SFTP with async implementation:\n\n```bash\n# Download from SSH server using agent authentication\norbit --source ssh://user@example.com/remote/file.txt --dest ./file.txt\n\n# Upload to SFTP server (SSH and SFTP URIs are equivalent)\norbit --source ./local-file.txt --dest sftp://example.com/upload/file.txt\n\n# Recursive directory sync with compression\norbit --source /local/photos --dest ssh://backup.server.com/photos/ \\\n  --mode sync --compress zstd:3 --recursive\n\n# Download with resume support for unreliable connections\norbit --source ssh://server.com/large-file.iso --dest ./large-file.iso \\\n  --resume --retry-attempts 10\n```\n\n**SSH/SFTP Features:**\n- ✅ Pure Rust using libssh2 (battle-tested SSH library)\n- ✅ Async I/O with tokio::task::spawn_blocking (non-blocking operations)\n- ✅ Three authentication methods (SSH Agent, Private Key, Password)\n- ✅ Secure credential handling with `secrecy` crate\n- ✅ Connection timeout configuration\n- ✅ Automatic SSH handshake and session management\n- ✅ Full Backend trait implementation (stat, list, read, write, delete, mkdir, rename)\n- ✅ Recursive directory operations\n- ✅ Optional SSH compression for text files\n- ✅ Compatible with all SFTP servers (OpenSSH, etc.)\n- ✅ Resume 
support with checkpoint recovery\n- ✅ Integration with manifest system\n\n**Authentication Priority:**\n1. **SSH Agent** (Default) — Most secure, no credentials in command history\n2. **Private Key File** — Supports passphrase-protected keys\n3. **Password** — Use only when key-based auth unavailable\n\n📖 **Full Documentation:** See [`docs/guides/PROTOCOL_GUIDE.md`](docs/guides/PROTOCOL_GUIDE.md#-ssh--sftp-production-ready)\n\n#### S3 Cloud Storage (Streaming Optimized)\n\nTransfer files seamlessly to AWS S3 and S3-compatible storage services with **streaming I/O** and advanced features:\n\n```bash\n# Upload to S3 (streams directly from disk, no memory buffering!)\norbit --source /local/dataset.tar.gz --dest s3://my-bucket/backups/dataset.tar.gz\n\n# Download from S3 (optimized sliding window concurrency)\norbit --source s3://my-bucket/data/report.pdf --dest ./report.pdf\n\n# Sync directory to S3 with compression\norbit --source /local/photos --dest s3://my-bucket/photos/ \\\n  --mode sync --compress zstd:3 --recursive\n\n# Use with MinIO\nexport S3_ENDPOINT=http://localhost:9000\norbit --source file.txt --dest s3://my-bucket/file.txt\n```\n\n**S3 Features:**\n- ✅ Pure Rust (no AWS CLI dependency)\n- ✅ **Streaming multipart upload** - Files ≥5MB automatically use multipart with **5TB max file size**\n- ✅ **Constant memory usage** - ~200MB RAM for any file size upload/download\n- ✅ **Optimized downloads** - Sliding window concurrency for 30-50% faster transfers\n- ✅ **Lazy S3 pagination** - List millions of objects with ~10MB RAM\n- ✅ Resumable transfers with checkpoint support\n- ✅ Parallel chunk transfers (configurable)\n- ✅ All storage classes (Standard, IA, Glacier, etc.)\n- ✅ Server-side encryption (AES-256, AWS KMS)\n- ✅ S3-compatible services (MinIO, LocalStack, DigitalOcean Spaces)\n- ✅ Flexible authentication (env vars, credentials file, IAM roles)\n- ✅ Full integration with manifest system\n- ✅ Object versioning and lifecycle management\n- ✅ Batch 
operations with rate limiting\n- ✅ **Resilience patterns** — Circuit breaker, connection pooling, and rate limiting via Magnetar ⭐\n\n📖 **Full Documentation:** See [`docs/guides/S3_USER_GUIDE.md`](docs/guides/S3_USER_GUIDE.md)\n📖 **Streaming Guide:** See [`docs/guides/BACKEND_STREAMING_GUIDE.md`](docs/guides/BACKEND_STREAMING_GUIDE.md) ⭐ **NEW!**\n\n#### Azure Blob Storage\n\n**NEW in v0.6.0**: Production-ready Azure Blob Storage backend using the industry-standard `object_store` crate.\n\nTransfer files seamlessly to Microsoft Azure Blob Storage with streaming I/O:\n\n```bash\n# Upload to Azure Blob Storage\norbit --source /local/dataset.tar.gz --dest azblob://mycontainer/backups/dataset.tar.gz\n\n# Download from Azure\norbit --source azure://mycontainer/data/report.pdf --dest ./report.pdf\n\n# Sync directory to Azure with compression\norbit --source /local/photos --dest azblob://photos-container/backup/ \\\n  --mode sync --compress zstd:3 --recursive\n\n# Test with Azurite (Azure Storage Emulator)\nexport AZURE_STORAGE_ACCOUNT=\"devstoreaccount1\"\nexport AZURE_STORAGE_KEY=\"Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==\"\norbit --source file.txt --dest azblob://testcontainer/file.txt\n```\n\n**Azure Features:**\n- ✅ Pure Rust using `object_store` crate (used by Apache Arrow DataFusion)\n- ✅ **Unified cloud API** - Same crate powers S3, Azure, and GCS backends\n- ✅ **Streaming I/O** - Memory-efficient transfers for large files\n- ✅ **Environment variable authentication** - Works with AZURE_STORAGE_ACCOUNT + AZURE_STORAGE_KEY\n- ✅ **Connection string support** - Compatible with AZURE_STORAGE_CONNECTION_STRING\n- ✅ **Azurite compatible** - Test locally with Azure Storage Emulator\n- ✅ **URI schemes** - Both `azblob://` and `azure://` supported\n- ✅ **Full Backend trait** - stat, list, read, write, delete, mkdir, rename, exists\n- ✅ **Prefix support** - Virtual directory isolation within containers\n- ✅ **Strong 
consistency** - Azure Blob Storage guarantees\n- ✅ **Production-ready** - 33% less code than Azure SDK implementation\n\n📖 **Implementation Status:** See [`docs/project-status/AZURE_IMPLEMENTATION_STATUS.md`](docs/project-status/AZURE_IMPLEMENTATION_STATUS.md)\n\n#### Google Cloud Storage\n\n**NEW in v0.6.0**: Production-ready Google Cloud Storage backend using the industry-standard `object_store` crate.\n\nTransfer files seamlessly to Google Cloud Storage with streaming I/O:\n\n```bash\n# Upload to Google Cloud Storage\norbit --source /local/dataset.tar.gz --dest gs://mybucket/backups/dataset.tar.gz\n\n# Download from GCS\norbit --source gcs://mybucket/data/report.pdf --dest ./report.pdf\n\n# Sync directory to GCS with prefix\norbit --source /local/photos --dest gs://mybucket/archives/photos \\\n  --mode sync --resume --parallel 8 --recursive\n```\n\n**Authentication:**\n```bash\n# Service account JSON file (recommended)\nexport GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json\n\n# Or use service account credentials directly\nexport GOOGLE_SERVICE_ACCOUNT=myaccount@myproject.iam.gserviceaccount.com\nexport GOOGLE_SERVICE_ACCOUNT_KEY=\"-----BEGIN PRIVATE KEY-----\\n...\"\n```\n\n**Features:**\n- ✅ **Service account support** - GOOGLE_APPLICATION_CREDENTIALS or direct credentials\n- ✅ **Streaming I/O** - Memory-efficient large file transfers\n- ✅ **URI schemes** - Both `gs://` and `gcs://` supported\n- ✅ **Full Backend trait** - stat, list, read, write, delete, mkdir, rename, exists\n- ✅ **Prefix support** - Virtual directory isolation within buckets\n- ✅ **Strong consistency** - Google Cloud Storage guarantees\n- ✅ **Production-ready** - Using battle-tested object_store crate (same as Azure and S3)\n\n📖 **Full Documentation:** See [`docs/guides/GCS_USER_GUIDE.md`](docs/guides/GCS_USER_GUIDE.md)\n\n#### SMB/CIFS Network Shares\n\n```bash\n# Copy to SMB share (when available)\norbit --source /local/file.txt --dest 
smb://user:pass@server/share/file.txt\n\n# Sync directories over SMB\norbit --source /local/data --dest smb://server/backup \\\n  --mode sync --resume --parallel 4 --recursive\n```\n\n**SMB Features:**\n- Pure Rust (no libsmbclient dependency)\n- SMB2/3 only (SMBv1 disabled for security)\n- **Enforced security policies** (RequireEncryption, SignOnly, Opportunistic)\n- Encryption support (AES-128/256-GCM, AES-128/256-CCM)\n- Packet signing (HMAC-SHA256, AES-GMAC, AES-CMAC)\n- Async/await with Tokio\n- Custom port support for non-standard deployments\n- Adaptive chunking (256KB-2MB blocks)\n- Integration with manifest system\n\n---\n\n### 📊 Audit and Telemetry (V3 Unified Observability)\n\n**NEW in v0.6.0**: Enterprise-grade observability with **cryptographic integrity** and distributed tracing.\n\nEvery operation emits structured events for compliance auditing, troubleshooting, and operational monitoring.\n\n#### 🔒 Cryptographic Audit Chaining\n\nOrbit V3 provides **tamper-evident audit logs** using HMAC-SHA256 cryptographic chaining. 
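The chaining idea can be sketched in miniature: each event's integrity hash covers its own payload plus the previous event's hash, so editing, dropping, or reordering any record invalidates every hash from that point on. The sketch below substitutes std's `DefaultHasher` for the keyed HMAC-SHA256 that Orbit actually uses, so it demonstrates only the chain structure, not the cryptography:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for HMAC-SHA256(key, prev_hash || payload). DefaultHasher is NOT
// a MAC; it only illustrates how each entry binds to its predecessor.
pub fn chain_hash(prev: u64, payload: &str) -> u64 {
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    payload.hash(&mut h);
    h.finish()
}

/// Append an event, linking it to the hash of the previous one.
pub fn append(log: &mut Vec<(String, u64)>, payload: &str) {
    let prev = log.last().map(|(_, h)| *h).unwrap_or(0);
    let hash = chain_hash(prev, payload);
    log.push((payload.to_string(), hash));
}

/// Recompute the chain from the start; any after-the-fact edit, deletion,
/// or reorder changes some recomputed hash and fails verification.
pub fn verify(log: &[(String, u64)]) -> bool {
    let mut prev = 0u64;
    for (payload, stored) in log {
        let expected = chain_hash(prev, payload);
        if *stored != expected {
            return false;
        }
        prev = expected;
    }
    true
}
```

Orbit's `scripts/verify_audit.py` performs this same walk over the JSONL log, recomputing the real HMAC-SHA256 keyed by `ORBIT_AUDIT_SECRET`.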
Any modification, deletion, or reordering of audit events is immediately detectable.\n\n**Enable Secure Audit Logging:**\n```bash\n# Set HMAC secret key (required for cryptographic chaining)\nexport ORBIT_AUDIT_SECRET=$(openssl rand -hex 32)\n\n# Enable audit logging with integrity protection\norbit --source /source --dest /dest --audit-log ./audit.jsonl\n\n# Verify log integrity (detects any tampering)\npython3 scripts/verify_audit.py audit.jsonl\n```\n\n**V3 Event Format (JSONL with HMAC chain):**\n```json\n{\n  \"trace_id\": \"335f8464197139ab59c4494274e55749\",\n  \"span_id\": \"4a63b017626d3de5\",\n  \"timestamp\": \"2025-12-18T11:56:41.722338400Z\",\n  \"sequence\": 0,\n  \"integrity_hash\": \"a70ee3ca57a26eb650d19b4d7ed66d28d3fc187137b8edb182c5ea2d7a8eeee9\",\n  \"payload\": {\n    \"type\": \"file_start\",\n    \"source\": \"/data/file.bin\",\n    \"dest\": \"s3://bucket/backup/file.bin\",\n    \"bytes\": 1048576\n  }\n}\n```\n\n#### 🌍 Distributed Tracing (W3C Trace Context)\n\nOrbit supports **W3C Trace Context** for distributed tracing across microservices and remote transfers.\n\n**Enable OpenTelemetry Export:**\n```bash\n# Export traces to Jaeger/Honeycomb/Datadog\norbit --source /source --dest /dest \\\n  --audit-log ./audit.jsonl \\\n  --otel-endpoint http://jaeger:4317\n```\n\n**Trace correlation features:**\n- **W3C-compliant** trace IDs (32-char hex) and span IDs (16-char hex)\n- **Hierarchical correlation** — trace_id → job_id → file_id → span_id\n- **Cross-service tracing** — Trace transfers across Nucleus, Star, and Sentinel components\n- **Backend instrumentation** — All 45 backend methods emit trace spans (S3, SMB, SSH, local)\n\n#### LLM-Native Debug Logging (Developer Mode)\n\nWhen you need clean, LLM-friendly logs without audit/HMAC or OTel layers, enable the JSON-only debug mode:\n\n```bash\nTEST_LOG=llm-debug RUST_LOG=debug \\\n  cargo test --test integration_tests -- --nocapture\n\n# Or for normal runs\nORBIT_LOG_MODE=llm-debug 
RUST_LOG=debug \\\n  orbit --source /source --dest /dest\n```\n\n#### 📊 Prometheus Metrics\n\n**Expose metrics for monitoring:**\n```bash\norbit --source /source --dest /dest \\\n  --audit-log ./audit.jsonl \\\n  --metrics-port 9090\n\n# Scrape metrics at http://localhost:9090/metrics\ncurl http://localhost:9090/metrics | grep orbit_\n```\n\n**Available metrics:**\n- `orbit_transfer_retries_total` — Retry attempts by protocol\n- `orbit_backend_latency_seconds` — Backend operation latency (histogram)\n- `orbit_audit_integrity_failures_total` — Audit chain breaks (CRITICAL alert)\n- `orbit_files_transferred_total` — Successful transfers\n- `orbit_bytes_transferred_total` — Total bytes transferred\n\n#### 🛡️ Security \u0026 Compliance Features\n\n- **Tamper detection** — Any modification, deletion, insertion, or reordering detected\n- **Forensic validation** — Verify chain integrity with `verify_audit.py`\n- **Secret management** — HMAC keys via `ORBIT_AUDIT_SECRET` environment variable\n- **Monotonic sequencing** — Events are strictly ordered\n- **Compliance-ready** — SOC 2, HIPAA, GDPR audit trail support\n\n#### 📖 Documentation\n\nSee [docs/observability-v3.md](docs/observability-v3.md) for complete documentation including:\n- Configuration guide (environment variables, CLI flags, TOML config)\n- Integration with Jaeger, Honeycomb, Datadog, Grafana\n- Forensic validation procedures\n- Security best practices\n- Troubleshooting guide\n\n### 🔧 Advanced Transfer Optimizations\n\nSix rsync-inspired features, reimplemented to leverage Orbit's CDC + Star Map architecture:\n\n```bash\n# Sparse files — hole-aware writes for zero-heavy files (VMs, databases)\norbit -s /data/vm.qcow2 -d /backup/ --sparse auto\n\n# Hardlink preservation — detect and recreate hardlink groups\norbit -s /backups/daily/ -d /backups/offsite/ -R --preserve-hardlinks\n\n# In-place updates — modify destination directly (3 safety tiers)\norbit -s /data/large.img -d /backup/large.img --inplace\norbit 
-s /data/db.mdf -d /backup/db.mdf --inplace --inplace-safety journaled\n\n# Rename detection — find moved files by content, not name\norbit -s /project/ -d /backup/ -R --detect-renames\n\n# Incremental backups — hardlink unchanged files to reference\norbit -s /data/ -d /backups/today/ -R --link-dest /backups/yesterday/\n\n# Batch mode — record once, replay to many destinations\norbit -s /release/ -d /server1/ -R --write-batch update.batch\norbit --read-batch update.batch -d /server2/\n```\n\n| Feature | rsync Equivalent | Orbit Improvement |\n|---------|-----------------|-------------------|\n| `--sparse` | `--sparse` | Zero-cost detection during CDC; works with `--inplace` (rsync can't) |\n| `--preserve-hardlinks` | `-H` | Cross-platform (Unix + Windows FFI) |\n| `--inplace` | `--inplace` | Reflink/journaled/unsafe safety tiers (rsync has none) |\n| `--detect-renames` | `--fuzzy` | Content-aware chunk overlap vs filename similarity |\n| `--link-dest` | `--link-dest` | Chunk-level partial reuse vs all-or-nothing per file |\n| `--write-batch` | `--write-batch` | Content-addressed journal, portable across different destinations |\n\n**Current limitations:** `--sparse` and `--inplace` are mutually exclusive. Compression is incompatible with `--sparse`/`--inplace` (use `--sparse never` and avoid `--inplace`). Delta transfer (`--check delta`) is disabled when `--sparse` or `--inplace` is enabled (Orbit falls back to a full buffered copy). `--detect-renames` and `--link-dest` currently hardlink **exact matches** only; partial-chunk delta basis is planned. `--write-batch` records full-file create entries (no delta ops yet) and requires `--mode copy`.\n\nSee [Advanced Transfer Features](docs/architecture/ADVANCED_TRANSFER.md) for design details.\n\n---\n\n## 🚀 Quick Start\n\n\u003e **⚠️ Alpha Software:** Remember that Orbit is in active development (v0.6.0). 
Test thoroughly in non-production environments first, and always maintain backups when working with important data.\n\n### Install\n\n```bash\n# From source\ngit clone https://github.com/saworbit/orbit.git\ncd orbit\n\n# Minimal build (local copy only, ~10MB binary) - DEFAULT\ncargo build --release\n\n# With network protocols (S3, SMB, SSH)\ncargo build --release --features network\n\n# With Control Plane API\ncargo build --release --features api\n\n# Full build (everything)\ncargo build --release --features full\n\n# Install to system\nsudo cp target/release/orbit /usr/local/bin/\n\n# Or with cargo install\ncargo install --path .                    # Minimal\ncargo install --path . --features network  # With network\ncargo install --path . --features full    # Everything\n```\n\n\u003e **v0.5+:** Orbit defaults to a minimal build (just local copy with zero-copy optimizations) for fastest compile times and smallest binaries. Network protocols and GUI are opt-in via feature flags.\n\n### Feature Flags \u0026 Binary Sizes\n\n**v0.5-0.6 Performance Improvements:**\n- 🎯 **60% smaller default binary** — Minimal build is ~10MB (was ~50MB)\n- ⚡ **50% faster compilation** — Default build in ~60s (was ~120s)\n- 🔒 **Reduced attack surface** — No web server code in default CLI build\n- 🚀 **2x Delta throughput** — Gear64 hash replaces Adler-32 for better collision resistance\n\n| Feature | Description | Binary Size | Default |\n|---------|-------------|-------------|---------|\n| `zero-copy` | OS-level zero-copy syscalls for maximum speed | +1MB | ✅ Yes |\n| `network` | All network protocols (S3, SMB, SSH, Azure, GCS) | +31MB | ❌ No |\n| `s3-native` | Amazon S3 and compatible storage | +15MB | ❌ No |\n| `smb-native` | Native SMB2/3 network shares | +8MB | ❌ No |\n| `ssh-backend` | SSH/SFTP remote access | +5MB | ❌ No |\n| `azure-native` | Microsoft Azure Blob Storage | +3MB | ❌ No |\n| `gcs-native` | Google Cloud Storage | +3MB | ❌ No |\n| `api` | Control Plane REST API 
(v2.2.0+) | +15MB | ❌ No |\n| `delta-manifest` | SQLite-backed delta persistence | +3MB | ❌ No |\n| `extended-metadata` | xattr + ownership (Unix/Linux/macOS only) | +500KB | ❌ No |\n| `full` | All features enabled | +50MB | ❌ No |\n\n```bash\n# Minimal: Fast local copies only (~10MB)\ncargo build --release\ncargo install --path .\n\n# Network: Add S3, SMB, SSH, Azure support (~38MB)\ncargo build --release --features network\ncargo install --path . --features network\n\n# GUI: Add web dashboard (~25MB)\ncargo build --release --features gui\ncargo install --path . --features gui\n\n# Full: Everything including network + GUI (~50MB+)\ncargo build --release --features full\ncargo install --path . --features full\n\n# Size-optimized: Maximum compression\ncargo build --profile release-min\n```\n\n### First-Time Setup (NEW in v0.7.0!)\n\n**Interactive Configuration Wizard** - Get started in seconds with the new `orbit init` command:\n\n```bash\n# Run the interactive setup wizard\norbit init\n```\n\n**What it does:**\n1. 🔍 **Scans your system** — Detects CPU cores, RAM, and I/O speed\n2. 💬 **Asks about your use case** — Backup, Sync, Cloud, or Network\n3. ⚙️ **Generates optimal config** — Auto-tuned for your hardware\n4. 🔐 **Creates security secrets** — JWT secret for Web Dashboard (optional)\n5. 💾 **Saves to `~/.orbit/orbit.toml`** — Ready to use immediately\n\n**Example session:**\n```\n🪐 Welcome to Orbit Setup\n   We will scan your system and create an optimized configuration.\n\nScanning system environment...\n  16 CPU cores detected\n  32 GB RAM available\n  I/O throughput: ~450 MB/s\n\nConfiguration Setup\n? 
What is your primary use case?\n  \u003e Backup (Reliability First)\n    Sync (Speed First)\n    Cloud Upload (Compression First)\n    Network Transfer (Resume + Compression)\n\n✅ Configuration saved to: /home/user/.orbit/orbit.toml\n```\n\n**Pre-configured profiles:**\n- **Backup** → Resume, checksum verification, 5 retries\n- **Sync** → Zero-copy, trust modtime, maximum speed\n- **Cloud** → Zstd compression, 10 retries, exponential backoff\n- **Network** → Resume + compression balanced for SMB/NFS\n\nAfter running `orbit init`, your config is ready! All transfers will use your optimized settings automatically.\n\n📖 **Full Guide:** See [`docs/guides/INIT_WIZARD_GUIDE.md`](docs/guides/INIT_WIZARD_GUIDE.md)\n\n---\n\n### Basic Usage\n\n```bash\n# Simple copy\norbit --source source.txt --dest destination.txt\n\n# Copy with resume and checksum verification\norbit --source large-file.iso --dest /backup/large-file.iso --resume\n\n# Recursive directory copy with compression\norbit --source /data/photos --dest /backup/photos --recursive --compress zstd:3\n\n# Sync with parallel workers\norbit --source /source --dest /destination --mode sync --workers 8 --recursive\n\n# High-concurrency S3 upload (256 workers, 8 parts per file)\norbit --source dataset.tar.gz --dest s3://my-bucket/backups/dataset.tar.gz \\\n  --workers 256 --concurrency 8\n\n# Upload to S3 with execution statistics\norbit --source dataset.tar.gz --dest s3://my-bucket/backups/dataset.tar.gz --stat -H\n\n# Preserve metadata with transformations\norbit --source /data --dest /backup --recursive \\\n  --preserve=times,perms,owners \\\n  --transform=\"case:lower\"\n\n# Selective transfer with filters\norbit --source /project --dest /backup --recursive \\\n  --exclude=\"target/**\" \\\n  --exclude=\"*.log\" \\\n  --include=\"important.log\"\n\n# Use filter file for complex rules\norbit --source /data --dest /backup --recursive \\\n  --filter-from=backup.orbitfilter\n\n# Resilient transfer with retries and 
logging\norbit --source /data --dest /backup --recursive \\\n  --retry-attempts 5 \\\n  --exponential-backoff \\\n  --error-mode partial \\\n  --log-level debug \\\n  --log /var/log/orbit.log\n\n# Skip failed files for batch operations\norbit --source /archive --dest /backup --recursive \\\n  --error-mode skip \\\n  --verbose\n\n# Preview transfer with dry-run before executing\norbit --source /data --dest /backup --recursive --dry-run --verbose\n\n# Bandwidth-limited transfer with progress tracking\norbit --source /large/dataset --dest /backup --recursive \\\n  --max-bandwidth 10 \\\n  --workers 4 \\\n  --show-progress\n\n# Create flight plan manifest\norbit manifest plan --source /data --dest /backup --output ./manifests\n\n# Batch execution: run multiple operations in parallel\norbit run --file commands.txt --workers 256\n\n# Stream S3 object to stdout (requires s3-native feature)\norbit cat s3://bucket/data/report.csv | head -100\n\n# Upload stdin to S3\ntar czf - /data | orbit pipe s3://bucket/backups/data.tar.gz\n\n# Generate a pre-signed URL (expires in 1 hour)\norbit presign s3://bucket/data/report.csv --expires 3600\n\n# S3 wildcard listing (optimized prefix scan)\norbit --source \"s3://bucket/data/2024-*.parquet\" --dest /local --recursive\n```\n\n---\n\n## ⚡ Performance Benchmarks\n\n### Local Transfer Performance\n\n| File Size | Traditional cp | Orbit (Zero-Copy) | Speedup | CPU Usage |\n|-----------|----------------|-------------------|---------|-----------|\n| 10 MB | 12 ms | 8 ms | 1.5× | ↓ 65% |\n| 1 GB | 980 ms | 340 ms | 2.9× | ↓ 78% |\n| 10 GB | 9.8 s | 3.4 s | 2.9× | ↓ 80% |\n\n**macOS APFS Optimization**: On APFS filesystems (macOS 10.13+), file copies complete **instantly** via Copy-On-Write cloning — regardless of file size! 
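The speedups in the table above come from dispatching each copy to the cheapest primitive the platform offers instead of always streaming through userspace buffers. A minimal Rust sketch of that kind of dispatch — the enum, function name, and 1 MiB threshold are illustrative assumptions, not Orbit's actual internals:

```rust
/// Illustrative sketch only: strategy names and the size threshold
/// are hypothetical, not Orbit's real implementation.
#[derive(Debug, PartialEq)]
enum CopyStrategy {
    /// e.g. copy_file_range on Linux, fclonefileat on APFS
    ZeroCopy,
    /// buffered read/write fallback for cross-filesystem copies
    Streaming,
}

fn pick_strategy(same_filesystem: bool, file_len: u64) -> CopyStrategy {
    // Zero-copy syscalls only apply within a single filesystem, and
    // only pay off once the file is large enough to amortize setup.
    if same_filesystem && file_len >= 1 << 20 {
        CopyStrategy::ZeroCopy
    } else {
        CopyStrategy::Streaming
    }
}

fn main() {
    assert_eq!(pick_strategy(true, 10 << 20), CopyStrategy::ZeroCopy);
    assert_eq!(pick_strategy(true, 4 << 10), CopyStrategy::Streaming);
    assert_eq!(pick_strategy(false, 10 << 20), CopyStrategy::Streaming);
    println!("strategy selection ok");
}
```

The key property this illustrates is that the fallback is always available: when the fast path does not apply, the transfer still proceeds through the buffered path.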
Data is only duplicated when modified, providing near-zero latency for large files.\n\n### S3 Transfer Performance\n\n- **Multipart Upload:** 500+ MB/s on high-bandwidth links\n- **Parallel Workers:** Up to 256 concurrent file operations (configurable via `--workers`)\n- **Per-File Concurrency:** 5 concurrent parts per multipart upload (configurable via `--concurrency`)\n- **Adaptive Chunking:** 5MB-2GB chunks based on file size\n- **Wildcard Optimization:** Prefix-scoped listing with in-memory glob filtering\n- **Resume Efficiency:** Chunk-level verification with intelligent restart decisions\n\n### Compression Performance\n\n- Zstd level 3 → up to 2.3× faster than uncompressed transfer on bandwidth-limited networks\n- LZ4 → near line-rate throughput for local copies\n- Adaptive selection based on link speed\n\n---\n\n## 🧠 Smart Strategy Selection\n\nOrbit automatically selects the optimal transfer strategy:\n\n```\nSame-disk large file  → Zero-copy (copy_file_range on Linux, APFS cloning on macOS)\nmacOS APFS            → Instant Copy-On-Write cloning (fclonefileat)\nCross-filesystem      → Streaming with buffer pool\nSlow network link     → Compression (zstd/lz4)\nCloud storage (S3)    → Multipart with parallel chunks\nUnreliable network    → Smart resume (detect corruption, revalidate)\nCritical data         → SHA-256 checksum + audit log\nDirectory transfers   → Disk Guardian pre-flight checks\n```\n\nYou can override with explicit flags when needed.\n\n---\n\n## 📈 Use Cases\n\n### Cloud Data Lake Ingestion\n\n```bash\n# Upload analytics data to S3\norbit --source /data/analytics --dest s3://data-lake/raw/2025/ \\\n  --recursive \\\n  --parallel 16 \\\n  --compress zstd:3\n```\n\n**Benefits:** Parallel uploads, compression, checksums, automatic pre-flight checks\n\n### Enterprise Backup\n\n```bash\n# Use manifest system for complex backup jobs\norbit manifest plan --source /data --dest /backup --output ./manifests\norbit manifest verify --manifest-dir ./manifests\n```\n\n**Benefits:** Resume, checksums, parallel jobs, full 
audit trail, disk space validation\n\n### Hybrid Cloud Migration\n\n```bash\n# Migrate local storage to S3\norbit --source /on-prem/data --dest s3://migration-bucket/data \\\n  --mode sync \\\n  --recursive \\\n  --resume \\\n  --parallel 12\n```\n\n**Benefits:** Resumable, parallel transfers, pre-flight safety checks\n\n### Data Migration\n\n```bash\norbit --source /old-storage --dest /new-storage \\\n  --recursive \\\n  --parallel 16 \\\n  --show-progress\n```\n\n**Benefits:** Parallel streams, verification enabled by default, progress tracking, disk space validation\n\n### Network Shares\n\n```bash\norbit --source /local/files --dest smb://nas/backup \\\n  --mode sync \\\n  --recursive \\\n  --resume \\\n  --retry-attempts 10\n```\n\n**Benefits:** Native SMB, automatic resume, exponential backoff\n\n---\n\n## ⚙️ Configuration\n\n### Configuration File\n\nPersistent defaults via `orbit.toml`:\n\n```toml\n# ~/.orbit/orbit.toml or ./orbit.toml\n\n# Copy mode: \"copy\", \"sync\", \"update\", or \"mirror\"\ncopy_mode = \"copy\"\n\n# Enable recursive directory copying\nrecursive = true\n\n# Preserve file metadata (timestamps, permissions)\npreserve_metadata = true\n\n# Detailed metadata preservation flags (overrides preserve_metadata if set)\n# Options: \"times\", \"perms\", \"owners\", \"xattrs\", \"all\"\npreserve_flags = \"times,perms,owners\"\n\n# Metadata transformation configuration\n# Format: \"rename:pattern=replacement,case:lower,strip:xattrs\"\ntransform = \"case:lower\"\n\n# Strict metadata preservation (fail on any metadata error)\nstrict_metadata = false\n\n# Verify metadata after transfer\nverify_metadata = false\n\n# Enable resume capability for interrupted transfers\nresume_enabled = true\n# Resume persistence is atomic (temp + rename); set ORBIT_RESUME_SLEEP_BEFORE_RENAME_MS for crash simulations\n\n# Enable checksum verification\nverify_checksum = true\n\n# Compression: \"none\", \"lz4\", or { zstd = { level = 5 } }\ncompression = { zstd = { level = 
5 } }\n\n# Show progress bar\nshow_progress = true\n\n# Chunk size in bytes for buffered I/O\nchunk_size = 1048576  # 1 MB\n\n# Number of retry attempts on failure\nretry_attempts = 3\n\n# Retry delay in seconds\nretry_delay_secs = 2\n\n# Use exponential backoff for retries\nexponential_backoff = true\n\n# Maximum bandwidth in bytes per second (0 = unlimited)\nmax_bandwidth = 0\n\n# Number of parallel file workers (0 = auto: 256 for network, CPU count for local)\n# Alias: --parallel on CLI\nparallel = 0\n\n# Per-operation concurrency for multipart transfers (parts per file)\nconcurrency = 5\n\n# Show execution statistics at end of run\nshow_stats = false\n\n# Human-readable output (e.g., \"1.5 GiB\" instead of raw bytes)\nhuman_readable = false\n\n# Symbolic link handling: \"skip\", \"follow\", or \"preserve\"\nsymlink_mode = \"skip\"\n\n# Error handling mode: \"abort\" (stop on error), \"skip\" (skip failed files), or \"partial\" (keep partial files for resume)\nerror_mode = \"abort\"\n\n# Log level: \"error\", \"warn\", \"info\", \"debug\", or \"trace\"\nlog_level = \"info\"\n\n# Path to log file (omit for stdout)\n# log_file = \"/var/log/orbit.log\"\n\n# Enable verbose logging (shorthand for log_level = \"debug\")\nverbose = false\n\n# Include patterns (glob, regex, or path - can be specified multiple times)\n# Examples: \"*.rs\", \"regex:^src/.*\", \"path:Cargo.toml\"\ninclude_patterns = [\n    \"**/*.rs\",\n    \"**/*.toml\",\n]\n\n# Exclude patterns (glob, regex, or path - can be specified multiple times)\n# Examples: \"*.tmp\", \"target/**\", \"regex:^build/.*\"\nexclude_patterns = [\n    \"*.tmp\",\n    \"*.log\",\n    \".git/*\",\n    \"node_modules/*\",\n    \"target/**\",\n]\n\n# Load filter rules from a file (optional)\n# filter_from = \"backup.orbitfilter\"\n\n# Dry run mode (don't actually copy)\ndry_run = false\n\n# Use zero-copy system calls when available\nuse_zero_copy = true\n\n# Generate manifests for transfers\ngenerate_manifest = false\n\n# 
Audit log format: \"json\" or \"csv\"\naudit_format = \"json\"\n\n# Path to audit log file\naudit_log_path = \"/var/log/orbit_audit.log\"\n```\n\n### Configuration Priority\n\n1. CLI arguments (highest)\n2. `./orbit.toml` (project)\n3. `~/.orbit/orbit.toml` (user)\n4. Built-in defaults (lowest)\n\n---\n\n## 🧩 Modular Architecture\n\n### Phase 1: OrbitSystem I/O Abstraction (v0.6.0-alpha.1)\n\n**NEW!** Orbit now features a universal I/O abstraction layer that decouples core logic from filesystem operations.\n\n**Key Components:**\n\n- **`orbit-core-interface`**: Defines the `OrbitSystem` trait\n  - Discovery: `exists()`, `metadata()`, `read_dir()`\n  - Data Access: `reader()`, `writer()`\n  - Compute Offloading: `read_header()`, `calculate_hash()`\n\n- **`LocalSystem`**: Default provider for standalone mode (wraps `tokio::fs`)\n- **`MockSystem`**: In-memory implementation for testing (no disk I/O)\n\n**Benefits:**\n\n- ✅ **Testability**: Unit tests without filesystem via `MockSystem`\n- ✅ **Flexibility**: Runtime switching between Local/Remote providers\n- ✅ **Future-Ready**: Foundation for distributed Grid/Star topology\n- ✅ **Performance**: Compute offloading enables efficient distributed CDC\n\n```rust\nuse orbit::system::LocalSystem;\nuse orbit_core_interface::OrbitSystem;\n\nasync fn example() -\u003e anyhow::Result\u003c()\u003e {\n    let system = LocalSystem::new();\n    let path = std::path::Path::new(\"example.bin\");\n    let _header = system.read_header(path, 512).await?;\n    // Same code works for a future RemoteSystem!\n    Ok(())\n}\n```\n\n📖 **See:** [`docs/specs/PHASE_1_ABSTRACTION_SPEC.md`](docs/specs/PHASE_1_ABSTRACTION_SPEC.md)\n\n---\n\n### Crate Structure\n\nOrbit is built from clean, reusable crates:\n\n| Crate | Purpose | Status |\n|-------|---------|--------|\n| 🔌 `orbit-core-interface` | OrbitSystem I/O abstraction (Phase 1) | 🟢 Stable |\n| 🧩 `core-manifest` | Manifest parsing and job orchestration | 🟡 Beta |\n| 🌌 `core-starmap` | Job planner, dependency graph, container packing | 🟡 Beta |\n| 🌌 
`core-starmap::universe` | Global deduplication index (V2) | 🔴 Alpha |\n| 🌌 `core-starmap::migrate` | V1→V2 migration utilities | 🔴 Alpha |\n| 🧬 `core-cdc` | FastCDC content-defined chunking (V2) | 🔴 Alpha |\n| 🧠 `core-semantic` | Intent-based replication, composable prioritizers (V2) | 🔴 Alpha |\n| 📊 `core-audit` | Structured logging, telemetry, typed provenance | 🟡 Beta |\n| ⚡ `core-zero-copy` | OS-level optimized I/O | 🟢 Stable |\n| 🗜️ `core-compress` | Compression and decompression | 🟢 Stable |\n| 🛡️ `disk-guardian` | Pre-flight space \u0026 integrity checks | 🟡 Beta |\n| 🧲 `magnetar` | Idempotent job state machine (SQLite + redb) | 🟡 Beta |\n| 🛡️ `magnetar::resilience` | Circuit breaker, connection pool, rate limiter | 🟡 Beta |\n| 🛡️ `core-resilience` | Backpressure, penalization, dead-letter, health monitor, ref-counted GC | 🔴 Alpha |\n| ⭐ `orbit-star` | Star agent with formalized lifecycle hooks | 🔴 Alpha |\n| 📡 `orbit-connect` | Grid connectivity with bulletin board | 🔴 Alpha |\n| 🛡️ `orbit-sentinel` | Autonomous resilience engine (Phase 5 OODA loop) | 🔴 Alpha |\n| 🌐 `protocols` | Network protocol implementations | 🟡 S3/SSH Beta, 🔴 SMB Alpha |\n| 🌐 `orbit-server` | Headless Control Plane API (v2.2.0-alpha) | 🔴 Alpha |\n| 🎨 `orbit-dashboard` | React dashboard (v2.2.0-alpha) | 🔴 Alpha |\n| 🕵️ `core-watcher` | Monitoring beacon | 🚧 Planned |\n| 🧪 `wormhole` | Forward-error correction | 🚧 Planned |\n\nThis structure ensures isolation, testability, and reusability.\n\n---\n\n## 🖥️ Orbit Control Plane v2.2.0-alpha - \"The Separation\"\n\n**Breaking architectural change:** Orbit v2.2.0 separates the monolithic web application into a **headless Control Plane (Rust)** and a **modern Dashboard (React/TypeScript)**, enabling independent deployment, faster iteration, and better scalability.\n\n### Architecture Overview\n\n```\n┌─────────────────────┐\n│  Orbit Dashboard    │  React 19 + Vite + TypeScript\n│  (Port 5173)        │  TanStack Query + React 
Flow\n└──────────┬──────────┘\n           │ HTTP/WebSocket\n           │ (CORS enabled)\n           ▼\n┌─────────────────────┐\n│  Control Plane API  │  Axum + OpenAPI/Swagger\n│  (Port 8080)        │  JWT Auth + WebSocket\n└──────────┬──────────┘\n           │\n           ▼\n┌─────────────────────┐\n│  Magnetar Database  │  SQLite + redb\n└─────────────────────┘\n```\n\n### Quick Start\n\n**Option 1: Use the launcher scripts** (Easiest)\n\n```bash\n# Unix/Linux/macOS\n./scripts/launch-orbit.sh\n\n# Windows\nscripts\\launch-orbit.bat\n```\n\n**Option 2: Manual startup**\n\n```bash\n# Terminal 1: Start Control Plane\ncd crates/orbit-web\ncargo run --bin orbit-server\n\n# Terminal 2: Start Dashboard\ncd dashboard\nnpm install  # First time only\nnpm run dev\n```\n\n**Access Points:**\n- 🎨 **Dashboard**: http://localhost:5173\n- 🔌 **API**: http://localhost:8080/api\n- 📚 **Swagger UI**: http://localhost:8080/swagger-ui\n- 🔒 **Default credentials**: `admin` / `orbit2025` (⚠️ Change in production!)\n\n**What's Running:**\n- ☢️ **Reactor Engine**: Background job executor (starts automatically with orbit-server)\n- 🎨 **Dashboard Dev Server**: React app with hot reload (port 5173)\n- 🔌 **API Server**: RESTful API with WebSockets (port 8080)\n\n**Browser Safety**: Launch scripts open the dashboard in a **new browser tab** and will **NOT kill your other tabs** when you close the script. 
Safe to use with your existing browser session!\n\n### 🛰️ E2E Demo Harness - \"Deep Space Telemetry Scenario\"\n\n**NEW in v2.2.0!** Experience Orbit's full capabilities with an automated end-to-end demonstration that showcases real-time job management, visual chunk maps, and live telemetry tracking.\n\n**🛡️ Safety First (Recommended for First-Time Users):**\n\nBefore running the demo, use the safety validator to verify your system is ready **without making any changes**:\n\n```bash\n# Unix/Linux/macOS\n./scripts/validate-demo-safety.sh\n\n# Windows (Git Bash)\nbash scripts/validate-demo-safety.sh\n```\n\nThe validator checks system requirements, port availability, disk space, and shows exactly what the demo will do. See [docs/project-status/SAFETY_FIRST.md](docs/project-status/SAFETY_FIRST.md) for complete safety documentation.\n\n**Option 3: Run the E2E Demo** (Best for first-time users and demonstrations)\n\n```bash\n# Unix/Linux/macOS\n./scripts/demo-orbit.sh\n\n# Windows\nscripts\\demo-orbit.bat\n```\n\n**Requirements:**\n- 💾 **Disk Space:** 4GB free (or 400MB if binaries already built) - [See details](DISK_SPACE_GUIDE.md)\n- ⏱️ **Duration:** ~5-10 minutes (includes build time)\n- 🌐 **Ports:** 8080 (API) and 5173 (Dashboard) must be available\n\n**What the demo does:**\n1. ✅ **Environment Validation** - Verifies Rust, Node.js, and port availability\n2. 📊 **Data Fabrication** - Generates ~170MB of synthetic telescope telemetry data\n3. 🚀 **System Ignition** - Launches both Control Plane and Dashboard\n4. 🎯 **Job Injection** - Programmatically creates and starts a transfer job via REST API\n5. 👁️ **Observation Phase** - Interactive pause to explore the dashboard's Visual Chunk Map and live telemetry graphs\n6. 
🧹 **Cleanup** - Gracefully terminates services and removes temporary data\n\n**Features demonstrated:**\n- **Magnetar State Machine** - Job lifecycle management (`pending` → `running` → `completed`)\n- **Real-Time Dashboard** - Visual Chunk Map showing chunk-level transfer progress\n- **Live Telemetry** - Transfer speed graphs and statistics\n- **REST API** - Programmatic job creation and control\n- **Resilient Transfer** - Compression, verification, parallel workers (4 concurrent)\n\n**Perfect for:**\n- 🎬 **Sales Demonstrations** - Show Orbit's capabilities to stakeholders\n- 🧪 **Development Testing** - Validate full-stack functionality quickly\n- 📚 **Training** - Onboard new developers to the architecture\n- 🤖 **CI/CD Integration** - Automated E2E testing in pipelines\n\n📖 **Full Documentation:** See [`DEMO_GUIDE.md`](DEMO_GUIDE.md) for detailed usage, troubleshooting, and customization options.\n\n### Compilation Modes: Headless vs Full UI\n\n**NEW in v2.2.0!** The Control Plane now supports **compile-time modularity** via feature flags, allowing you to build either a lightweight headless API server or a full-featured server with embedded dashboard.\n\n#### Scenario A: Headless Mode (Default)\nBuild a smaller, API-only binary without UI dependencies. 
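Feature-gated builds of this kind typically come down to `cfg` attributes selecting alternate implementations at compile time, so the excluded code never reaches the binary at all. A minimal sketch of the pattern — the function and banner strings are hypothetical, not orbit-server's actual code:

```rust
// Hypothetical illustration of cfg-gated modularity; not taken from
// orbit-server. Building with `--features ui` compiles the first
// variant; the default build compiles only the headless variant.
#[cfg(feature = "ui")]
fn startup_banner() -> &'static str {
    "UI mode: serving embedded dashboard"
}

#[cfg(not(feature = "ui"))]
fn startup_banner() -> &'static str {
    "Headless mode: API-only server"
}

fn main() {
    // In a default (no-features) build, this prints the headless banner.
    println!("{}", startup_banner());
}
```

Because the selection happens at compile time rather than via a runtime flag, the headless binary carries no dashboard assets or static-file-serving code.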
Perfect for automation, CI/CD pipelines, or custom frontend integrations.\n\n```bash\n# Minimal binary - no UI, smaller attack surface\ncargo build --release -p orbit-server\n\n# Binary size: ~15MB (vs ~25MB with UI)\n# No static file serving, no dashboard embedded\n```\n\n**Use cases:**\n- Kubernetes/Docker deployments with separate UI CDN\n- API-only microservices\n- Custom dashboard integration\n- Embedded systems with limited storage\n\n#### Scenario B: Full UI Mode\nBuild with embedded dashboard for all-in-one deployment.\n\n```bash\n# Full binary with embedded React dashboard\ncargo build --release -p orbit-server --features ui\n\n# Binary serves dashboard from dashboard/dist\n# Requires: npm run build in dashboard/ first\n```\n\n**Use cases:**\n- Single-binary desktop applications\n- Quick demos and development\n- End-user installations\n- Local workstation deployment\n\n#### Runtime Behavior\n\n**Headless Mode:**\n```\n⚙️ Headless Mode: Dashboard not included, API-only server\nOrbit Control Plane (Headless Mode) - API available at /api/*\n```\n\n**UI Mode:**\n```\n🎨 UI Feature Enabled: Serving embedded dashboard from dashboard/dist\nDashboard available at http://localhost:8080/\n```\n\n### Control Plane Features (v2.2.0-alpha)\n\n#### ✅ OpenAPI-Documented REST API\n- **Swagger UI** at `/swagger-ui` for interactive API testing\n- **Type-safe endpoints** with utoipa schema generation\n- **Job Management**: Create, list, monitor, cancel, delete jobs\n- **Backend Configuration**: Manage S3, SMB, SSH, Local backends\n- **Authentication**: JWT-based auth with httpOnly cookies\n- **Real-time Updates**: WebSocket streams at `/ws/:job_id`\n\n#### 🚧 Intelligent Scheduling (Planned)\n- **Duration Estimation**: Predict transfer times based on historical data\n- **Bottleneck Detection**: Proactive warnings for performance issues\n- **Confidence Scoring**: Reliability metrics for time estimates\n- **Priority Queues**: Smart job ordering for critical transfers\n\n#### ✅ 
Production Security\n- **JWT Authentication** with 24-hour expiration\n- **Argon2 Password Hashing** (OWASP recommended)\n- **Role-Based Access Control** (Admin/Operator/Viewer)\n- **CORS Configuration** for dashboard integration\n- **Environment-based secrets** via `ORBIT_JWT_SECRET`\n\n### Dashboard Features (v2.2.0-rc.1)\n\n#### ✅ Modern React Stack\n- **React 19** with TypeScript for type safety and strict ESLint compliance\n- **Vite 7** for instant hot module replacement (HMR)\n- **TanStack Query** for intelligent data fetching and caching\n- **Tailwind CSS 4** with tailwindcss-animate plugin for professional design and smooth animations\n- **Lucide Icons** for consistent iconography\n- **@xyflow/react 12** for visual pipeline editing\n- **Full-Screen Layout**: Edge-to-edge dashboard design with removed Vite scaffolding constraints\n\n#### ✅ Cockpit-Style App Shell (NEW in Unreleased)\n- **Sidebar Navigation**: Professional persistent sidebar replacing top navigation bar\n- **Live Status Indicator**: Animated pulsing green dot for \"System Online\" confirmation\n- **Pre-Alpha Warning**: Prominent warning banner across all views\n- **Mobile Drawer**: Smooth slide-in menu with backdrop overlay for mobile devices\n- **Responsive Design**: Fully optimized from 320px to 4K displays\n- **Theme Integration**: Dark/light mode toggle with consistent styling\n- **Operator Profile**: Gradient avatar with system status in sidebar\n\n#### ✅ Mission Control Dashboard (NEW in Unreleased - Embedded Visibility)\n- **Live Telemetry**: Real-time network throughput with SVG area charts\n- **Client-Side Buffering**: 30-point rolling history for smooth \"live\" feel\n- **Metric Cards**: Active Jobs, Throughput, System Load, Storage Health with trend indicators\n- **Animated Status**: Pulsing green dot for \"Live Stream Active\" confirmation\n- **Capacity Planning**: Donut chart visualization with used/available space breakdown\n- **Traffic Statistics**: Peak, Average, and Total 
Transferred metrics\n\n#### ✅ Deep-Dive Job Details (NEW in Unreleased - Embedded Visibility)\n- **Visual Chunk Map**: 100-cell grid showing completion progress with color coding\n- **Glowing Effects**: Green (completed) and red (failed) chunks with shadow effects\n- **Proportional Sampling**: Intelligent downsampling for jobs with \u003e100 chunks\n- **Event Stream**: Real-time lifecycle events with timestamps and status icons\n- **Configuration Display**: Detailed source/destination, mode, compression, verification\n- **Performance Metrics**: Throughput, chunk statistics, and timing data\n- **Breadcrumb Navigation**: \"Job List → Job #N\" with back button\n\n#### ✅ Enhanced Job Management (NEW in Unreleased)\n- **Click-to-Expand**: Select any job to view detailed inspection view\n- **Real-time Search**: Filter jobs by ID, source path, or destination path\n- **Status Filtering**: Dropdown to filter by All/Running/Pending/Completed/Failed\n- **Manual Refresh**: Button for on-demand data refresh\n- **Compact Mode**: Shows 5 most recent jobs for dashboard integration\n- **Enhanced Empty States**: Helpful messaging with icons for better user guidance\n\n#### ✅ Professional File Browser (rc.1)\n- **Click-to-Select** files and folders with visual feedback\n- **Up Navigation** button to traverse parent directories\n- **Folder Selection** button for directory transfers\n- **Visual Indicators**: Selected items highlighted in blue with dark mode support\n- **Loading States**: Spinner and error handling for API calls\n- **RESTful API**: GET `/api/files/list?path={path}` endpoint\n\n#### ✅ Improved Quick Transfer (NEW in Unreleased)\n- **Visual Flow**: Source → destination with animated connector\n- **Color Coding**: Blue borders for source, orange for destination\n- **State Management**: Success/error feedback (no more browser alerts)\n- **Auto-reset**: Form clears automatically after successful transfer\n- **Validation**: Better input validation and loading states\n\n#### ✅ 
Visual Pipeline Builder\n- **React Flow v12** DAG editor for intuitive job configuration\n- **Drag-and-drop** source and destination nodes\n- **Theme-aware**: Uses design system colors for consistent styling\n- **Icon Toolbar**: Enhanced buttons with Database/Zap/Cloud icons\n- **Node Counter**: Displays current number of nodes and connections\n\n#### ✅ User Management (NEW in Unreleased)\n- **Statistics Dashboard**: Cards showing Total Users, Admins, and Operators\n- **Delete Functionality**: Remove users with confirmation dialogs\n- **Gradient Avatars**: Auto-generated avatars with user initials\n- **Role Badges**: Theme-aware badges for Admin/Operator/Viewer roles\n- **Enhanced Forms**: Better layout with clear field labeling\n\n#### ✅ Smart Data Fetching\n- **Adaptive Polling**: 2s for jobs and health, optimized for responsiveness\n- **Optimistic Updates**: Instant UI feedback on mutations\n- **Automatic Cache Invalidation**: Always shows fresh data\n- **Request Deduplication**: Efficient network usage\n\n#### ✅ Real-time Monitoring\n- **Live Job Status** with progress bars and percentages\n- **Transfer Speed Tracking** with chunk completion metrics\n- **Sparkline Trends**: Visual representation of metric history\n- **Auto-refresh**: Continuous updates for active monitoring\n\n#### ✅ CI/CD Pipeline (rc.1)\n- **Dashboard Quality Control**: Dedicated GitHub Actions job\n  - Prettier formatting checks\n  - ESLint linting (zero warnings)\n  - TypeScript strict type checking\n  - npm security audit (high severity)\n  - Vitest unit tests\n  - Production build verification\n- **Rust Security**: cargo-audit integrated into backend CI\n- **Local Validation**: `npm run ci:check` for pre-push checks\n\n### API Examples\n\n**Create a Job**\n```bash\ncurl -X POST http://localhost:8080/api/jobs \\\n  -H \"Content-Type: application/json\" \\\n  -d '{\n    \"source\": \"/data/backup\",\n    \"destination\": \"s3://bucket/backup\",\n    \"compress\": true,\n    \"verify\": 
true,\n    \"parallel_workers\": 4\n  }'\n```\n\n**Get Job Status**\n```bash\ncurl http://localhost:8080/api/jobs/1\n```\n\n**WebSocket Monitoring**\n```javascript\nconst ws = new WebSocket('ws://localhost:8080/ws/1');\nws.onmessage = (event) =\u003e {\n  const update = JSON.parse(event.data);\n  console.log('Progress:', update.progress);\n};\n```\n\n### Development\n\n**Backend (Control Plane)**\n```bash\ncd crates/orbit-web\ncargo watch -x 'run --bin orbit-server'  # Auto-reload on changes\ncargo check  # Quick compilation check\ncargo audit  # Security vulnerability scan\n```\n\n**Frontend (Dashboard)**\n```bash\ncd dashboard\nnpm install              # Install dependencies (first time)\nnpm run dev              # Vite HMR enabled\nnpm run ci:check         # Run all checks before pushing\nnpm run format:fix       # Auto-fix code formatting\nnpm run typecheck        # TypeScript validation\nnpm test                 # Run unit tests\n```\n\n**API Documentation**\n```bash\n# Generate and open API docs\ncd crates/orbit-web\ncargo doc --open -p orbit-server\n```\n\n**Pre-Push Checklist**\n```bash\n# Backend\ncargo fmt --all --check\ncargo clippy --all\ncargo test\ncargo audit\n\n# Frontend\ncd dashboard\nnpm run ci:check  # Runs: typecheck + lint + format:check + test\n```\n\n### Configuration\n\n**Environment Variables:**\n```bash\n# Control Plane\nexport ORBIT_SERVER_HOST=0.0.0.0       # Bind address (default: 127.0.0.1)\nexport ORBIT_SERVER_PORT=8080          # API port (default: 8080)\nexport ORBIT_JWT_SECRET=$(openssl rand -base64 32)  # REQUIRED for production\nexport ORBIT_MAGNETAR_DB=magnetar.db   # Job database path\nexport ORBIT_USER_DB=users.db          # Auth database path\n\n# Dashboard\n# Edit dashboard/.env if needed\nVITE_API_URL=http://localhost:8080\n```\n\n### Migration from v1.0 (Nebula)\n\nThe v2.2.0 architecture is a complete rewrite. 

### Migration from v1.0 (Nebula)

The v2.2.0 architecture is a complete rewrite. Key changes:

| v1.0 (Nebula) | v2.2.0 (Control Plane) |
|---------------|------------------------|
| Leptos SSR | Axum REST API + React SPA |
| `orbit-web` binary | `orbit-server` + separate dashboard |
| Monolithic | Decoupled microservices |
| Server-side rendering | Client-side rendering |
| `cargo leptos watch` | `cargo run` + `npm run dev` |
| `/pkg` WASM assets | Static JSON API |

**Breaking Changes:**
- `orbit serve` now **only** starts the API (no UI bundled)
- Dashboard must be hosted separately or via CDN
- API endpoints remain compatible but are now OpenAPI-documented
- Authentication flow unchanged (JWT cookies)

### Deployment

**Production Checklist:**
- [ ] Set `ORBIT_JWT_SECRET` (minimum 32 characters)
- [ ] Change the default admin password
- [ ] Configure CORS for your dashboard domain
- [ ] Use HTTPS (reverse proxy recommended: nginx/Caddy)
- [ ] Set up persistent volumes for databases
- [ ] Configure firewall rules (allow 8080 for the API, 5173 for the dev dashboard)
- [ ] Enable request logging (`RUST_LOG=info`)

**Docker Compose Example** (Coming soon)

### Roadmap

- ✅ v2.2.0-alpha.1 - Basic separation, API refactoring, React scaffolding
- 🚧 v2.2.0-alpha.2 - Interactive job creation UI, visual pipeline editor
- 🚧 v2.2.0-beta.1 - Complete dashboard features, duration estimation API
- 🚧 v2.2.0-rc.1 - Production hardening, performance optimization
- 🚧 v2.2.0 - Stable release with full documentation

### Troubleshooting

**Control Plane won't start**
```bash
# Check if the port is in use
lsof -i :8080  # Unix
netstat -ano | findstr :8080  # Windows

# Check logs
RUST_LOG=debug cargo run --bin orbit-server
```

**Dashboard can't connect to API**
```bash
# Verify the Control Plane is running
curl http://localhost:8080/api/health

# Check CORS configuration in server.rs
# Ensure the dashboard origin is allowed
```

**JWT Authentication fails**
```bash
# Ensure JWT_SECRET is set
echo $ORBIT_JWT_SECRET

# Generate a new secret
export ORBIT_JWT_SECRET=$(openssl rand -base64 32)
```
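Since the production checklist requires a secret of at least 32 characters, a pre-start script can verify the generated value before launching the server. A sketch, assuming `openssl` is available:

```shell
# Generate a secret and refuse to proceed if it is shorter than 32 characters
# (the 32-character minimum comes from the production checklist above)
ORBIT_JWT_SECRET=$(openssl rand -base64 32)
if [ "${#ORBIT_JWT_SECRET}" -lt 32 ]; then
  echo "ORBIT_JWT_SECRET is too short (${#ORBIT_JWT_SECRET} chars)" >&2
  exit 1
fi
echo "secret length OK (${#ORBIT_JWT_SECRET} chars)"
```

`openssl rand -base64 32` emits 44 characters of base64 for 32 random bytes, so the check passes; it only fires when someone substitutes a weak hand-typed value.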

### Support & Documentation

- 📖 **API Docs**: http://localhost:8080/swagger-ui (when running)
- 📁 **Source**: [crates/orbit-web/](crates/orbit-web/) (Control Plane), [dashboard/](dashboard/) (React app)
- 📝 **CHANGELOG**: [CHANGELOG.md](CHANGELOG.md#architecture-shift---orbit-control-plane-v220-alpha-breaking)
- 🐛 **Issues**: [GitHub Issues](https://github.com/saworbit/orbit/issues)

---

## 🔐 Security

- **Safe Path Handling** — Prevents traversal attacks
- **Checksum Verification** — SHA-256, BLAKE3 for integrity
- **Credential Protection** — Memory scrubbing on drop, no credential logging
- **S3 Encryption** — Server-side encryption (AES-256, AWS KMS)
- **No Telemetry Phone-Home** — All data stays local
- **AWS Credential Chain** — Secure credential sourcing (IAM roles, env vars, credential files)
- **Pre-Flight Validation** — Disk Guardian prevents dangerous operations
- **Future FIPS Support** — Compliance-ready crypto modules

### 🛡️ Dependency Security & Build Features

**Default Build Security:** The default `cargo build` configuration includes **zero runtime security vulnerabilities**. Our minimal feature set (`zero-copy` only) ensures the smallest possible attack surface.

| Build Configuration | Security Status | Use Case |
|---------------------|-----------------|----------|
| `cargo build` (default) | ✅ **Zero vulnerabilities** | Production deployments |
| `cargo build --features api` | ✅ **Zero vulnerabilities** | Web dashboard (SQLite only) |
| `cargo build --features smb-native` | ⚠️ **Optional advisory** | SMB protocol (see note below) |
| `cargo build --features full` | ⚠️ **Optional advisory** | Testing & development only |

**Optional Feature Advisory:** When building with `--features smb-native`, a medium-severity timing side-channel advisory (RUSTSEC-2023-0071) is present in the SMB authentication stack.
It is exploitable only during active SMB connections and does not affect other protocols or default builds.

**Security Verification:**
```bash
# Verify the default build has no active vulnerabilities
cargo tree -p rsa           # Expected: "nothing to print"
cargo tree -p sqlx-mysql    # Expected: "package ID not found"
```

For complete security audit results, dependency chain analysis, and mitigation details, see **[SECURITY.md](SECURITY.md#dependency-security-audit)**.

---

## 📖 CLI Quick Reference

**Transfer operations:**
```bash
orbit --source <PATH> --dest <PATH> [FLAGS]
```

**Parallelism flags:**
| Flag | Description | Default |
|------|-------------|---------|
| `--workers N` | Parallel file workers (0 = auto) | 0 (auto: 256 for network, CPU count for local) |
| `--parallel N` | Alias for `--workers` | 0 |
| `--concurrency N` | Multipart parts per file | 5 |

**Output flags:**
| Flag | Description |
|------|-------------|
| `--stat` | Print execution statistics at end of run |
| `-H` / `--human-readable` | Human-readable byte values (e.g., "1.5 GiB") |
| `--json` | JSON Lines output for all operations (machine-parseable) |

**S3 flags:**
| Flag | Description |
|------|-------------|
| `--content-type TYPE` | Set Content-Type header on uploads |
| `--cache-control VAL` | Set Cache-Control header on uploads |
| `--acl ACL` | Canned ACL (e.g., `private`, `public-read`, `bucket-owner-full-control`) |
| `--no-sign-request` | Anonymous access for public S3 buckets |
| `--use-acceleration` | Enable S3 Transfer Acceleration |
| `--request-payer` | Access requester-pays buckets |
| `--aws-profile NAME` | Use a specific AWS credential profile |
| `--credentials-file PATH` | Custom AWS credentials file path |
| `--part-size N` | Multipart part size in MiB (default: 50) |
| `--no-verify-ssl` | Skip SSL certificate verification |
| `--use-list-objects-v1` | Use ListObjects v1 API for compatibility |
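To illustrate how the flags above combine, here is a representative invocation with placeholder paths and bucket; the snippet only assembles and echoes the command rather than running it:

```shell
# Illustrative transfer: 8 file workers, 4 multipart parts per file,
# 100 MiB parts, and human-readable statistics at the end.
# Paths and bucket name are placeholders.
cmd="orbit --source /data/photos --dest s3://my-bucket/photos \
--workers 8 --concurrency 4 --part-size 100 --stat -H"
echo "$cmd"
```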

**Conditional copy flags:**
| Flag | Description |
|------|-------------|
| `-n` / `--no-clobber` | Do not overwrite existing files |
| `--if-size-differ` | Only copy if source and destination sizes differ |
| `--if-source-newer` | Only copy if source is newer than destination |
| `--flatten` | Strip directory hierarchy during copy |
| `--raw` | Disable wildcard expansion (treat patterns as literal keys) |
| `--force-glacier-transfer` | Force transfer of Glacier-stored objects |
| `--ignore-glacier-warnings` | Suppress Glacier object warnings |

**Subcommands:**
```bash
orbit run [--file FILE] [--workers N]     # Batch execution from file/stdin
orbit ls s3://bucket/prefix [-e] [-s]     # List S3 objects
orbit head s3://bucket/key                # Show S3 object metadata
orbit du s3://bucket/prefix [-g]          # Show storage usage
orbit rm s3://bucket/pattern [--dry-run]  # Delete S3 objects
orbit mv s3://src s3://dst                # Move/rename S3 objects
orbit mb s3://bucket-name                 # Create S3 bucket
orbit rb s3://bucket-name                 # Remove S3 bucket
orbit cat s3://bucket/key                 # Stream S3 object to stdout
orbit pipe s3://bucket/key                # Upload stdin to S3
orbit presign s3://bucket/key [--expires] # Generate pre-signed URL
orbit manifest <plan|verify|diff|info>    # Manifest operations
orbit <init|stats|presets|capabilities>   # Configuration & info
orbit serve [--addr HOST:PORT]            # Launch web GUI
```

> **Note:** `cat`, `pipe`, and `presign` require the `s3-native` feature flag.

---

## 🧪 Roadmap

### ✅ Core Features Implemented (v0.4.1 - v0.6.0)

**Stable/Well-Tested:**
- Zero-copy system calls (Linux, macOS, Windows)
- Compression engines (LZ4, Zstd)
- Checksum verification (SHA-256, BLAKE3)
- Modular crate architecture

**Beta/Recently Added (needs more real-world testing):**
- Resume and retry with chunk-level
verification
- Native S3 support with multipart transfers
- SSH/SFTP backend
- S3-compatible storage (MinIO, LocalStack)
- Disk Guardian: Pre-flight space & integrity checks
- Magnetar: Idempotent job state machine with SQLite + redb backends
- Magnetar Resilience Module: Circuit breaker, connection pooling, rate limiting
- Delta Detection: rsync-inspired efficient transfers with block-based diffing
- Metadata Preservation & Transformation
- Inclusion/Exclusion Filters: Glob, regex, and path patterns
- Progress Reporting & Operational Controls: Bandwidth limiting, concurrency control
- Manifest + Starmap + Audit integration
- Structured telemetry with JSON Lines

**Alpha/Experimental:**
- V2 Architecture (CDC, semantic replication, global dedup)
- SMB2/3 native implementation (awaiting upstream fix)
- **Orbit Control Plane v2.2.0-alpha.2** with React Dashboard
  - Visual Pipeline Editor (React Flow)
  - Interactive File Browser with filesystem navigation
  - Job Management UI with real-time progress tracking
  - REST API with OpenAPI documentation

### 🚧 In Progress (v0.6.0)

- Stabilizing V2 architecture components (CDC, semantic replication)
- Expanding test coverage for newer features
- Real-world validation of S3 and SSH backends
- Enhanced CLI with subcommands
- Web GUI interactive dashboard (Nebula beta)

### 🔮 Planned (v0.6.0+)

#### CLI Improvements
- Friendly subcommands (`orbit cp`, `orbit sync`, `orbit run`) as aliases
- Protocol-specific flags (`--smb-user`, `--region`, `--storage-class`)
- File watching mode (`--watch`)
- Interactive mode with prompts

#### New Protocols
- WebDAV protocol support

#### Advanced Features
- Wormhole FEC module for lossy networks
- REST orchestration API
- Job scheduler with cron-like syntax
- Plugin framework for custom protocols
- S3 Transfer Acceleration
- CloudWatch metrics integration
- Disk quota integration

---

## 🦀 Contributing

Pull requests
welcome! See `CONTRIBUTING.md` for code style and guidelines.

### Development

```bash
# Clone and build (includes S3, SMB, SSH by default)
git clone https://github.com/saworbit/orbit.git
cd orbit
cargo build

# Run tests (includes S3 backend tests)
cargo test

# Run with all features (adds extended-metadata, delta-manifest)
cargo build --features full
cargo test --features full

# Minimal build (no network backends or GUI)
cargo build --no-default-features --features zero-copy

# Format and lint
cargo fmt
cargo clippy
```

### Areas We Need Help

- 🌐 Resolving SMB upstream dependencies
- 🧪 Testing on various platforms
- 📚 Documentation improvements
- 🐛 Bug reports and fixes

---

## 📚 Documentation

### User Guides
- **Quick Start:** This README
- **🎨 Control Plane v2.2.0-alpha.2 Deployment:** [`DEPLOYMENT_GUIDE_V2.2.0-alpha.2.md`](DEPLOYMENT_GUIDE_V2.2.0-alpha.2.md) ⭐ **NEW!**
- **Nebula MVP Summary:** [`crates/orbit-web/NEBULA_MVP_SUMMARY.md`](crates/orbit-web/NEBULA_MVP_SUMMARY.md) ⭐ **v1.0.0-alpha.2**
- **Nebula Changelog:** [`crates/orbit-web/CHANGELOG.md`](crates/orbit-web/CHANGELOG.md) ⭐ **NEW!**
- **Nebula README:** [`crates/orbit-web/README.md`](crates/orbit-web/README.md) ⭐ **v1.0.0-alpha.2**
- **Web Dashboard (v2.2.0):** See Control Plane documentation
- **GUI Integration:** [`docs/GUI_INTEGRATION.md`](docs/GUI_INTEGRATION.md)
- **Testing & Validation Scripts:** [`docs/guides/TESTING_SCRIPTS_GUIDE.md`](docs/guides/TESTING_SCRIPTS_GUIDE.md) ⭐ **NEW!**
- **S3 Guide:** [`docs/guides/S3_USER_GUIDE.md`](docs/guides/S3_USER_GUIDE.md)
- **GCS Guide:** [`docs/guides/GCS_USER_GUIDE.md`](docs/guides/GCS_USER_GUIDE.md)
- **Disk Guardian:** [`docs/architecture/DISK_GUARDIAN.md`](docs/architecture/DISK_GUARDIAN.md)
- **Magnetar:** [`crates/magnetar/README.md`](crates/magnetar/README.md) ⭐ **NEW!**
- **Resilience Module:** [`crates/magnetar/src/resilience/README.md`](crates/magnetar/src/resilience/README.md) ⭐
**NEW!**
- **Data Flow Patterns:** [`docs/architecture/DATA_FLOW_PATTERNS.md`](docs/architecture/DATA_FLOW_PATTERNS.md) ⭐ **NEW!**
- **Delta Detection:** [`docs/guides/DELTA_DETECTION_GUIDE.md`](docs/guides/DELTA_DETECTION_GUIDE.md) and [`docs/guides/DELTA_QUICKSTART.md`](docs/guides/DELTA_QUICKSTART.md) ⭐ **NEW!**
- **Filter System:** [`docs/guides/FILTER_SYSTEM.md`](docs/guides/FILTER_SYSTEM.md) ⭐ **NEW!**
- **Progress & Concurrency:** [`docs/architecture/PROGRESS_AND_CONCURRENCY.md`](docs/architecture/PROGRESS_AND_CONCURRENCY.md) ⭐ **NEW!**
- **Resume System:** [`docs/architecture/RESUME_SYSTEM.md`](docs/architecture/RESUME_SYSTEM.md)
- **Protocol Guide:** [`docs/guides/PROTOCOL_GUIDE.md`](docs/guides/PROTOCOL_GUIDE.md)

### Technical Documentation
- **SMB Status:** [`docs/SMB_NATIVE_STATUS.md`](docs/SMB_NATIVE_STATUS.md)
- **Manifest System:** [`docs/MANIFEST_SYSTEM.md`](docs/MANIFEST_SYSTEM.md)
- **Zero-Copy Guide:** [`docs/ZERO_COPY.md`](docs/ZERO_COPY.md)
- **Magnetar Quick Start:** [`crates/magnetar/QUICKSTART.md`](crates/magnetar/QUICKSTART.md) ⭐ **NEW!**
- **Resilience Patterns:** [`crates/magnetar/src/resilience/README.md`](crates/magnetar/src/resilience/README.md) ⭐ **NEW!**
- **Data Flow Patterns:** [`docs/architecture/DATA_FLOW_PATTERNS.md`](docs/architecture/DATA_FLOW_PATTERNS.md) ⭐ **NEW!**
- **API Reference:** Run `cargo doc --open`

### Examples
- **Basic Examples:** [`examples/`](examples/) directory
- **S3 Examples:** [`examples/s3_*.rs`](examples/)
- **Disk Guardian Demo:** [`examples/disk_guardian_demo.rs`](examples/disk_guardian_demo.rs)
- **Magnetar Examples:** [`crates/magnetar/examples/`](crates/magnetar/examples/) ⭐ **NEW!**
- **Resilience Demo:** [`crates/magnetar/examples/resilience_demo.rs`](crates/magnetar/examples/resilience_demo.rs) ⭐ **NEW!**
- **Filter Example:** [`examples/filters/example.orbitfilter`](examples/filters/example.orbitfilter) ⭐ **NEW!**
- **Progress Demo:**
[`examples/progress_demo.rs`](examples/progress_demo.rs)

---

## 🕵️ Watcher / Beacon

**Status:** 🚧 Planned for v0.6.0+

A companion service that will monitor Orbit runtime health:

**Planned Features:**
- Detect stalled transfers
- Track telemetry and throughput
- Trigger recovery actions
- Prometheus-compatible metrics export

This feature is currently in the design phase. See the [roadmap](#-roadmap) for details.

---

## 📜 License

**Apache License 2.0**

Orbit is licensed under the Apache License, Version 2.0 - a permissive open source license that allows you to:

- ✅ **Use** commercially and privately
- ✅ **Modify** and distribute
- ✅ **Patent use** - grants patent rights
- ✅ **Sublicense** to third parties

**Requirements:**
- **License and copyright notice** - Include a copy of the license and copyright notice with the software
- **State changes** - Document significant changes made to the code

**Limitations:**
- ❌ **Liability** - The license includes a limitation of liability
- ❌ **Warranty** - The software is provided "as is" without warranty
- ❌ **Trademark use** - Does not grant rights to use trade names or trademarks

📄 **Full license text:** See [LICENSE](LICENSE) or http://www.apache.org/licenses/LICENSE-2.0

```
Copyright 2024 Shane Wall

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

---

## 🙏 Acknowledgments

- Built with ❤️ in Rust
- Inspired by rsync, rclone, and modern transfer tools
- Thanks to the
Rust community for excellent crates
- AWS SDK for Rust team for the S3 client
- Special thanks to contributors and testers

---

<div align="center">

### Made with ❤️ and 🦀 by [Shane Wall](https://github.com/saworbit)

**Orbit — because your data deserves to travel in style.** ✨

[⬆ Back to Top](#-orbit)

</div>