{"id":47083807,"url":"https://github.com/magenx/kuberaptor","last_synced_at":"2026-04-09T20:16:58.095Z","repository":{"id":341805224,"uuid":"1169824469","full_name":"magenx/kuberaptor","owner":"magenx","description":"Hetzner K3s production-ready private clusters created with a single command. No programming required, no complexity.","archived":false,"fork":false,"pushed_at":"2026-03-03T11:02:49.000Z","size":182,"stargazers_count":1,"open_issues_count":1,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2026-03-03T14:16:48.751Z","etag":null,"topics":["automation","cloud","cluster","devops","docker","go","golang","hcloud","hetzner","hetzner-cloud","iac","k3s","k8s","kubernetes"],"latest_commit_sha":null,"homepage":"https://kuberaptor.com/","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/magenx.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-03-01T09:20:39.000Z","updated_at":"2026-03-03T11:02:52.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/magenx/kuberaptor","commit_stats":null,"previous_names":["magenx/kuberaptor"],"tags_count":null,"template":false,"template_full_name":null,"purl":"pkg:github/magenx/kuberaptor","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/magenx%2Fkuberaptor","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/magenx%2Fkuberaptor/tags","releases_url":"https://rep
os.ecosyste.ms/api/v1/hosts/GitHub/repositories/magenx%2Fkuberaptor/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/magenx%2Fkuberaptor/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/magenx","download_url":"https://codeload.github.com/magenx/kuberaptor/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/magenx%2Fkuberaptor/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30417930,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-12T07:30:13.030Z","status":"ssl_error","status_checked_at":"2026-03-12T07:29:54.885Z","response_time":114,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["automation","cloud","cluster","devops","docker","go","golang","hcloud","hetzner","hetzner-cloud","iac","k3s","k8s","kubernetes"],"created_at":"2026-03-12T07:30:21.519Z","updated_at":"2026-04-05T16:03:41.073Z","avatar_url":"https://github.com/magenx.png","language":"Go","readme":"## Kuberaptor - Hetzner K3s Cluster\n\u003e [kube`raptor]\n\n\u003cp align=\"center\"\u003e\n\u003cimg width=\"324\" height=\"170\" alt=\"Hetzner-Logo-Hosted_in_Europe\" src=\"https://github.com/user-attachments/assets/22d71337-fab9-4949-b153-e5614a1f5114\" /\u003e\n\u003cimg width=\"120\" height=\"130\" alt=\"k3s\" 
src=\"https://github.com/user-attachments/assets/226f1609-37e1-4b64-b868-9c034840f9fc\" /\u003e\n\u003cimg width=\"224\" height=\"224\" alt=\"kuberaptor-cardboard\" src=\"https://github.com/user-attachments/assets/fcd1d643-1768-4007-9633-fb5585d03e91\" /\u003e\n\n\u003c/p\u003e\n\n## 🎉 Project Status: **WIP**\n### An independent open source project, not affiliated with Hetzner Online GmbH.\nThe Kuberaptor project is a Kubernetes cluster management tool written in Go, providing automated cluster creation, management, and operations on Hetzner Cloud infrastructure.  \n\n📚 **[Documentation overview](https://deepwiki.com/magenx/kuberaptor)**  \n**[Website](https://www.kuberaptor.com/)**   \n**[Linkedin](https://www.linkedin.com/pulse/kuberaptor-production-ready-k3s-clusters-denis-z%25C3%25B5kov-903kf)**   \n\n---\n\n## 📊 Project Statistics\n\n| Metric | Value | Description |\n|--------|-------|-------------|\n| **Language** | **Go 1.24** | Modern, efficient, compiled language |\n| **Startup Time** | **\u003c10ms** | Instant binary startup |\n| **Binary Size** | **~12MB** | Compact single executable |\n| **Build Time** | **~20 sec** | Fast development iteration |\n| **Dependencies** | **Static binary** | Zero runtime dependencies |\n| **Test Coverage** | **Comprehensive** | Unit and integration tests included |\n| **Configuration** | **YAML** | Full syntax support |\n---\n\n## ✅ Core Features\n\n### Cluster Operations\n\n**1. Cluster Creation** ✅\n- Multi-master high availability with embedded etcd\n- Private network provisioning and configuration\n- Multiple worker pools with custom labels and taints\n- Placement groups for high availability (spread servers across physical hosts)\n- Automated SSH key management\n- K3s installation using official installation script\n- Load balancer creation for Kubernetes API access\n- Automated firewall configuration\n- Kubeconfig retrieval and local save\n- Comprehensive progress logging\n\n**2. 
Cluster Upgrade** ✅\n- System-upgrade-controller integration\n- Rolling upgrades with configurable concurrency\n  - Masters: 1 node at a time for stability\n  - Workers: 2 nodes at a time for efficiency\n- Automated upgrade plan generation\n- Post-upgrade health checks\n- Node cordoning during upgrades\n- Real-time progress monitoring\n\n**3. Cluster Deletion** ✅\n- Complete resource cleanup\n- Removes all associated resources:\n  - Servers (masters and workers)\n  - Private networks\n  - Load balancers\n  - Firewalls\n  - SSH keys\n  - Placement groups\n- Label-based resource discovery\n- Safe deletion with confirmation\n\n**4. Command Execution** ✅\n- Parallel command execution across cluster nodes\n- Goroutine-based concurrency for performance\n- Synchronized execution with sync.WaitGroup\n- Script file execution support\n- Per-node output display with clear formatting\n- Success/failure tracking per node\n\n**5. Release Management** ✅\n- K3s version fetching from GitHub API\n- Intelligent 7-day caching mechanism\n- Pagination support for large release lists\n- Version filtering and display\n\n### Infrastructure Components\n\n**1. Hetzner Cloud Integration** ✅\n- Official Hetzner Cloud Go SDK v2 integration\n- Complete server lifecycle management\n- Network management (creation, deletion, configuration)\n- Firewall management with rule configuration\n- Load balancer management with health checks\n- Placement group management for high availability\n- SSH key management\n- Location and instance type queries\n- Action waiting and status verification\n\n**2. Configuration System** ✅\n- Complete YAML configuration model\n- Configuration loader with intelligent defaults\n- Path expansion for SSH keys and kubeconfig\n- Comprehensive validation framework\n- Environment variable support\n- Schema validation\n\n**3. 
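The parallel execution model described under Command Execution above (one goroutine per node, joined with `sync.WaitGroup`, with per-node success/failure tracking) can be sketched roughly as follows. The names `result`, `runOnNode`, and `runParallel` are illustrative stand-ins, not Kuberaptor's actual types or functions:

```go
package main

import (
	"fmt"
	"sync"
)

// result tracks per-node output and success/failure.
type result struct {
	node string
	out  string
	err  error
}

// runOnNode stands in for executing one command on one node over SSH.
func runOnNode(node, command string) (string, error) {
	return fmt.Sprintf("[%s] ran %q", node, command), nil
}

// runParallel fans out one goroutine per node and joins them with a
// sync.WaitGroup; each goroutine writes only its own slot of results,
// so no extra locking is needed.
func runParallel(nodes []string, command string) []result {
	results := make([]result, len(nodes))
	var wg sync.WaitGroup
	for i, node := range nodes {
		wg.Add(1)
		go func(i int, node string) {
			defer wg.Done()
			out, err := runOnNode(node, command)
			results[i] = result{node: node, out: out, err: err}
		}(i, node)
	}
	wg.Wait()
	return results
}

func main() {
	for _, r := range runParallel([]string{"mykubic-master-1", "mykubic-worker-1"}, "uptime") {
		fmt.Println(r.out)
	}
}
```

Writing into a pre-sized slice by index keeps the goroutines free of shared mutable state, which is the usual idiom for this fan-out/join pattern.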
**3. Cloud-Init Integration** ✅
- Template-based cloud-init generation
- Master and worker node initialization
- Network configuration
- K3s installation automation

**4. Add-ons Management** ✅
- Hetzner Cloud Controller Manager
- Hetzner CSI Driver
- System Upgrade Controller
- Cluster Autoscaler support
- Cilium CNI (optional alternative to Flannel)

**5. Utilities** ✅
- SSH client with connection pooling
- Command execution and file transfer
- Server readiness checks
- Shell execution with output streaming
- File operations (SSH keys, kubeconfig)
- Logging utilities with configurable levels
- Retry logic with exponential backoff

### Network Security

**1. Load Balancer** ✅
- Kubernetes API load balancer creation
- Global load balancer for application traffic
- HTTPS/TLS support with managed SSL certificates
- Automatic master node targeting
- TCP health checks (15s interval, 10s timeout, 3 retries)
- Public IPv4/IPv6 support
- Automatic DNS configuration
- Cluster-specific labeling

**2. SSL Certificates** ✅
- Managed SSL certificate creation via Hetzner
- Automatic DNS validation using Hetzner DNS zones
- Root domain and wildcard subdomain coverage (example.com and *.example.com)
- Automatic attachment to HTTPS load balancer services
- Background certificate issuance (up to 5 minutes)
- Certificate lifecycle management (create/delete)
- Certificate preservation option to avoid Let's Encrypt rate limits during cluster recreation

**3. DNS Zone Management** ✅
- Automated DNS zone creation in Hetzner DNS
- Configurable TTL values
- Nameserver information display
- Integration with SSL certificate validation
- Cluster-specific zone labeling

**4. Firewall** ✅
- SSH access control from configured networks
- API access control from configured networks
- Full internal network communication (TCP/UDP/ICMP)
- CIDR notation support and validation
- Automatic security rule generation
- Dynamic rule updates

---

## 🏗️ Architecture

### Project Structure
```
kuberaptor/
├── cmd/kuberaptor/              # CLI application entry point
│   ├── main.go                   # Application initialization
│   └── commands/                 # Cobra CLI commands
│       ├── root.go               # Root command and global flags
│       ├── create.go             # Cluster creation command
│       ├── delete.go             # Cluster deletion command
│       ├── upgrade.go            # Cluster upgrade command
│       ├── run.go                # Command execution on nodes
│       ├── releases.go           # K3s release listing
│       ├── budget.go             # Cluster cost estimation
│       ├── config.go             # Configuration file generator
│       └── completion.go         # Shell completion generation
│
├── internal/                     # Private application code
│   ├── cluster/                  # Cluster operations (core logic)
│   │   ├── create_enhanced.go    # Cluster creation
│   │   ├── delete.go             # Cluster deletion
│   │   ├── upgrade_enhanced.go   # Cluster upgrades
│   │   ├── run_enhanced.go       # Parallel command execution
│   │   ├── network_resources.go  # Load balancer & firewall
│   │   ├── budget.go             # Cost estimation logic
│   │   └── helpers.go            # Shared helper functions
│   │
│   ├── config/                   # Configuration management
│   │   ├── main.go               # Main configuration structure
│   │   ├── loader.go             # Configuration file loader
│   │   ├── validator.go          # Configuration validation
│   │   ├── generator.go          # Sample config generation
│   │   ├── networking.go         # Network configuration
│   │   ├── node_pool.go          # Node pool configuration
│   │   ├── load_balancer.go      # Load balancer configuration
│   │   ├── dns_zone.go           # DNS zone configuration
│   │   ├── ssl_certificate.go    # SSL certificate configuration
│   │   ├── api_load_balancer.go  # API load balancer configuration
│   │   └── datastore_addons.go   # Datastore and addon configs
│   │
│   ├── cloudinit/                # Cloud-init template generation
│   │   └── generator.go          # Template rendering for nodes
│   │
│   ├── addons/                   # Kubernetes addon management
│   │   ├── installer.go          # Addon installation orchestration
│   │   ├── csi_driver.go         # Hetzner CSI driver
│   │   ├── cloud_controller_manager.go  # Hetzner CCM
│   │   ├── system_upgrade_controller.go # Upgrade controller
│   │   ├── cluster_autoscaler.go # Cluster autoscaler
│   │   └── cilium.go             # Cilium CNI addon
│   │
│   └── util/                     # Utility functions
│       ├── ssh.go                # SSH client implementation
│       ├── shell.go              # Shell command execution
│       ├── file.go               # File operations
│       ├── kubectl.go            # kubectl helper
│       ├── progress.go           # Progress reporting
│       ├── progress_reporter.go  # Progress reporter abstraction
│       └── toolinstaller.go      # Tool installation helpers
│
└── pkg/                          # Public reusable libraries
    ├── hetzner/                  # Hetzner Cloud API wrapper
    │   └── client.go             # Complete API client
    │
    ├── k3s/                      # K3s operations
    │   └── k3s.go                # Release fetcher, token generation
    │
    ├── templates/                # Template rendering
    │   └── templates.go          # Go template system
    │
    └── version/                  # Version information
        └── version.go            # Build-time version injection
```

### Key Design Principles

1. **Modularity**: Clear separation between CLI, business logic, and infrastructure
2. **Concurrency**: Goroutines for parallel operations and performance
3. **Error Handling**: Explicit error returns and comprehensive error messages
4. **Type Safety**: Strong typing throughout, with interfaces for abstraction
5. **Testability**: Unit and integration tests with clear boundaries
6. **Configuration**: YAML-based with validation and defaults

---

## 🎯 CLI Commands

All commands support both configuration files and command-line flags.

| Command | Description | Status |
|---------|-------------|--------|
| `create` | Create a new Kubernetes cluster on Hetzner Cloud | Ready |
| `delete` | Delete an existing cluster and all its resources | Ready |
| `upgrade` | Upgrade the cluster to a new k3s version | Ready |
| `run` | Execute commands or scripts on cluster nodes | Ready |
| `releases` | List available k3s versions from GitHub | Ready |
| `budget` | Show the estimated monthly cost of cluster resources | Ready |
| `config` | Generate a sample configuration file | Ready |
| `version` | Display application version information | Ready |
| `completion` | Generate shell completion scripts | Ready |

### Global Flags

- `--config` - Path to configuration file (YAML)
- `--verbose` - Enable verbose logging
- `--help` - Display help information

---

## 🚀 Usage Examples

### 1. Generate Configuration File

Before creating a cluster, generate a sample configuration file:

```bash
# Generate sample configuration in current directory
kuberaptor config --generate

# Generate with custom output path
kuberaptor config --generate --output my-cluster.yaml

# Short form
kuberaptor config -g -o my-cluster.yaml
```

This creates a fully documented YAML configuration file with sensible defaults that you can customize for your cluster.
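If you want a smaller starting point than the generated file, a minimal cluster definition can get by with just the basics. The keys below are taken from the full reference example in the next section; whether this is the exact minimum set of required fields is an assumption, not documented behavior:

```yaml
# my-cluster.yaml - minimal sketch (illustrative)
hetzner_token: <your_hetzner_cloud_token_here>
cluster_name: mykubic
kubeconfig_path: ~/.kube/config
k3s_version: v1.35.3+k3s1

masters_pool:
  instance_type: cpx22
  instance_count: 1
  locations:
    - nbg1

worker_node_pools:
  - name: workers
    instance_type: cpx22
    instance_count: 2
    locations:
      - nbg1
```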
### 2. Create Cluster | Advanced full config

**Configuration File (cluster.yaml):**

```yaml
# Hetzner Cloud API Token
hetzner_token: <your_hetzner_cloud_token_here>

# Cluster Configuration
# Configuration also supports YAML anchors

cluster_name: &cluster_name mykubic # Cluster name with yaml anchor
kubeconfig_path: ~/.kube/config
k3s_version: v1.35.3+k3s1

domain: &domain example.com   # Optional: required for DNS zone and SSL certificate (&domain yaml anchor)
locations: &locations         # Optional: &locations yaml anchor
  - nbg1

image: &image debian-13       # Optional: node OS image with &image yaml anchor
autoscaling_image: *image     # Optional: autoscaler node OS image via the *image yaml anchor

# Advanced Cluster Settings (Optional)
schedule_workloads_on_masters: false              # Allow workloads on master nodes (default: false)
include_instance_type_in_instance_name: false     # Include instance type in names (default: false)
protect_against_deletion: true                    # Prevent accidental deletion (default: true)
k3s_upgrade_concurrency: 1                        # Number of nodes to upgrade in parallel (default: 1)
grow_root_partition_automatically: true           # Auto-grow root partition (default: true)

# API Load Balancer Configuration (Optional)
api_server_hostname: api.example.com              # Custom API server hostname
api_load_balancer:
  enabled: true                                   # Create load balancer for Kubernetes API (default: false)
  hetzner:                                        # Extra Hetzner Cloud metadata labels (optional); default cluster labels always applied
    labels:
      - key: cluster_id
        value: "123456"
      - key: environment
        value: production

datastore:
  mode: "etcd"
  embedded_etcd:
    snapshot_retention: 24
    snapshot_schedule_cron: "0 * * * *"
    s3_enabled: true
    s3_endpoint: "nbg1.your-objectstorage.com"
    s3_region: nbg1
    s3_bucket: *cluster_name                     # S3 bucket name reuses the cluster name yaml anchor
    s3_folder: "etcd-snapshot"
    s3_access_key: "xxx"                         # Your Hetzner S3 storage keys
    s3_secret_key: "xxx"                         # Your Hetzner S3 storage keys

# Networking Configuration
networking:
  # CNI Configuration (Optional - defaults to Flannel)
  cni:
    enabled: false                      # Set to true to use a custom CNI (default: false, uses Flannel)
    mode: flannel                       # Options: flannel, cilium
    cilium:                             # Cilium-specific configuration
      enabled: true                     # Enable Cilium CNI
      version: "v1.19.2"                # Cilium version
      encryption_type: wireguard        # Options: wireguard, ipsec
      routing_mode: tunnel              # Options: tunnel, native
      tunnel_protocol: vxlan            # Options: vxlan, geneve
      hubble_enabled: true              # Enable Hubble observability
      hubble_relay_enabled: true
      hubble_ui_enabled: true
      k8s_service_host: 127.0.0.1
      k8s_service_port: 6444
      operator_replicas: 1
      operator_memory_request: 128Mi
      agent_memory_request: 512Mi
      egress_gateway_enabled: false
      hubble_metrics:
        - dns
        - drop
        - tcp
        - flow
        - port-distribution
        - icmp
        - http

  # SSH key configuration (ssh-keygen -t ed25519 -C "my kubernetes cluster")
  ssh:
    port: 22
    use_agent: false
    public_key_path: ~/.ssh/id_ed25519.pub
    private_key_path: ~/.ssh/id_ed25519

  # Private network for cluster nodes
  private_network:
    enabled: true
    subnet: 10.0.0.0/16
    existing_network_name: ""
    # NAT Gateway for private cluster configuration (bastion | jump host)
    nat_gateway:
      enabled: true
      instance_type: "cx23"
      locations: *locations           # Can reuse the *locations yaml anchor
      # Additional custom Hetzner labels can be used with all resources
      # Default cluster labels always applied
      hetzner:                        # Hetzner Cloud metadata labels (optional)
        labels:
          - key: cluster_id
            value: "123456"
          - key: environment
            value: production

  # Public network disabled
  public_network:
    ipv4:
      enabled: false
    ipv6:
      enabled: false

  # Access control lists
  allowed_networks:
    ssh:
      - 203.0.113.0/24     # Office network
      - 198.51.100.42/32   # Admin workstation
    api:
      - 0.0.0.0/0          # Public API access

# Master Nodes Configuration
# Example: three geo-replicated masters, one per location
masters_pool:
  instance_type: cpx22     # 2 vCPU, 4GB RAM, 80 GB SSD
  instance_count: 3        # HA configuration
  locations:               # Can reuse the *locations yaml anchor
    - fsn1                 # x1 Falkenstein
    - hel1                 # x1 Helsinki
    - nbg1                 # x1 Nuremberg

# Alternative: three masters in the same datacenter, using a placement group
# so each node runs on a different physical host (commented out because only
# one masters_pool key may appear in the file):
# masters_pool:
#   instance_type: cpx22     # 2 vCPU, 4GB RAM, 80 GB SSD
#   instance_count: 3        # HA configuration with placement groups
#   locations:
#     - fsn1                 # Falkenstein
#   placement_group:         # Physical host separation for multiple nodes in one DC
#     name: master
#     type: spread
#     labels:
#       - key: group
#         value: master
#       - key: ha
#         value: "true"

# Worker Nodes Configuration
# Example: three geo-replicated worker locations
# worker_node_pools:
#   - name: workers
#     instance_type: cx42    # 16 vCPU, 32GB RAM
#     instance_count: 6      # Distributed across locations
#     locations:             # Multi-location support (NEW) | Can reuse the *locations yaml anchor
#       - fsn1               # x2 nodes in Falkenstein
#       - hel1               # x2 nodes in Helsinki
#       - nbg1               # x2 nodes in Nuremberg

# Simple worker pools
worker_node_pools:
  - name: varnish
    instance_type: cpx22
    instance_count: 3
    locations: *locations
    # Additional custom Hetzner metadata labels can be used with all resources
    # Default Hetzner metadata cluster labels always applied
    hetzner:
      labels:
        - key: cluster_id
          value: "123456"
        - key: environment
          value: production
    # Additional custom Kubernetes labels and taints can be used with all resources
    # Default Kubernetes cluster labels always applied
    kubernetes:
      labels:
        - key: <label-key>
          value: <label-value>
      taints:
        - key: <taint-key>
          value: <taint-value>
          effect: NoSchedule

  - name: nginx
    instance_type: cpx22
    instance_count: 3
    locations: *locations

  - name: php
    instance_type: cpx22
    locations: *locations
    autoscaling:                       # Cluster Autoscaler enabled configuration
      enabled: true
      min_instances: 1
      max_instances: 3

# Global Load Balancer (Optional)
# Simplifies web traffic routing to an internal node pool
# SSL termination
load_balancer:
  name: *cluster_name
  enabled: true
  target_pools: ["varnish"]          # Internal varnish node pool label
  use_private_ip: true
  attach_to_network: true
  type: "lb11"
  locations: *locations
  algorithm:
    type: "round_robin"
  services:
    - protocol: "https"              # HTTPS service configuration; requires the DNS zone and SSL certificate below
      listen_port: 443
      destination_port: 80
      proxyprotocol: false
      health_check:
        protocol: "http"
        port: 80
        interval: 15
        timeout: 10
        retries: 3
        http:
          domain: *domain
          path: "/health_check.php"
          status_codes: ["2??", "3??"]
          tls: false

# DNS Zone Management (Optional, required for SSL certificate)
dns_zone:
  enabled: true
  name: example.com        # Will use domain if not specified | can use the yaml anchor
  ttl: 3600                # DNS TTL in seconds

# SSL Certificate (Optional, requires DNS zone)
ssl_certificate:
  enabled: true
  name: example.com        # Certificate name (will use domain if not specified)
  domain: example.com      # Domain for certificate (will use domain if not specified)
  preserve: true           # Preserve certificate during cluster deletion to avoid Let's Encrypt rate limits
  # Creates a certificate for example.com and *.example.com
  # The certificate is automatically validated via DNS and attached to HTTPS services
  # Preconfigure DNS:
  # _acme-challenge.example.com	IN	NS	hydrogen.ns.hetzner.com.
  # _acme-challenge.example.com	IN	NS	helium.ns.hetzner.com.
  # _acme-challenge.example.com	IN	NS	oxygen.ns.hetzner.com.
  # Setting preserve: true reuses existing certificates instead of requesting new ones

# Addons installation required for cluster functionality
addons:
  metrics_server:
    enabled: true
  csi_driver:
    enabled: true
    manifest_url: "https://raw.githubusercontent.com/hetznercloud/csi-driver/v2.20.0/deploy/kubernetes/hcloud-csi.yml"
  cluster_autoscaler:
    enabled: true
    manifest_url: "https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/hetzner/examples/cluster-autoscaler-run-on-master.yaml"
    container_image_tag: "v1.35.0"
    scan_interval: "10s"
    scale_down_delay_after_add: "10m"
    scale_down_delay_after_delete: "10s"
    scale_down_delay_after_failure: "3m"
    max_node_provision_time: "5m"
  cloud_controller_manager:
    enabled: true
    manifest_url: "https://github.com/hetznercloud/hcloud-cloud-controller-manager/releases/download/v1.30.1/ccm-networks.yaml"
  system_upgrade_controller:
    enabled: true
    deployment_manifest_url: "https://github.com/rancher/system-upgrade-controller/releases/download/v0.19.0/system-upgrade-controller.yaml"
    crd_manifest_url: "https://github.com/rancher/system-upgrade-controller/releases/download/v0.19.0/crd.yaml"
  embedded_registry_mirror:
    enabled: true

# Custom commands to configure private networking via the NAT Gateway
# Runs before k3s setup
additional_pre_k3s_commands:
  - apt autoremove -y hc-utils
  - apt purge -y hc-utils
  - echo "auto enp7s0" > /etc/network/interfaces
  - echo "iface enp7s0 inet dhcp" >> /etc/network/interfaces
  - echo "    post-up ip route add default via 10.0.0.1" >> /etc/network/interfaces
  - echo "[Resolve]" > /etc/systemd/resolved.conf
  - echo "DNS=185.12.64.2 185.12.64.1" >> /etc/systemd/resolved.conf
  - ifdown enp7s0 2>/dev/null || true
  - ifup enp7s0 2>/dev/null || true
  - systemctl enable --now resolvconf
  - echo "nameserver 185.12.64.2" >> /etc/resolvconf/resolv.conf.d/head
  - echo "nameserver 185.12.64.1" >> /etc/resolvconf/resolv.conf.d/head
  - resolvconf --enable-updates
  - resolvconf -u

# Custom commands to install additional packages
# Runs after k3s setup
additional_post_k3s_commands:
  - apt autoremove -y
  - apt update
  - apt install -y syslog-ng ufw
```

**Create the Cluster:**

```bash
kuberaptor create --config cluster.yaml
```
### 3. Create Multi-Location Regional Cluster (High Availability)

For maximum fault tolerance and geographic redundancy, distribute your cluster across multiple data centers:

```yaml
# regional-cluster.yaml
cluster_name: eu-regional
hetzner_token: ${HCLOUD_TOKEN}
kubeconfig_path: ~/.kube/eu-regional-config
k3s_version: v1.31.4+k3s1

# Distribute masters across 3 regions for HA
masters_pool:
  instance_type: cpx22        # 2 vCPU, 4GB RAM
  instance_count: 3           # Odd number for etcd quorum
  locations:
    - fsn1                    # Falkenstein, Germany
    - hel1                    # Helsinki, Finland
    - nbg1                    # Nuremberg, Germany

# Worker pools distributed across the same regions
worker_node_pools:
  # Static worker pool across all locations
  - name: general
    instance_type: cx42       # 16 vCPU, 32GB RAM
    instance_count: 6         # 2 per location (round-robin)
    locations:
      - fsn1
      - hel1
      - nbg1

  # Autoscaling pool with regional node groups
  - name: autoscaling
    instance_type: cpx32      # 8 vCPU, 16GB RAM
    locations:
      - fsn1                  # Creates node group: eu-regional-autoscaling-fsn1
      - hel1                  # Creates node group: eu-regional-autoscaling-hel1
      - nbg1                  # Creates node group: eu-regional-autoscaling-nbg1
    autoscaling:
      enabled: true
      min_instances: 3        # Distributed: 1 per location
      max_instances: 9        # Distributed: 3 per location

networking:
  private_network:
    enabled: true
    subnet: "10.0.0.0/16"
    nat_gateway:
      enabled: true
      instance_type: cpx11      # Smallest instance for NAT
      locations:                # Multi-location NAT gateways
        - fsn1                  # Creates: eu-regional-nat-gateway-fsn1
        - hel1                  # Creates: eu-regional-nat-gateway-hel1
        - nbg1                  # Creates: eu-regional-nat-gateway-nbg1
  allowed_networks:
    ssh:
      - "0.0.0.0/0"
    api:
      - "0.0.0.0/0"

# Global load balancer (optional)
load_balancer:
  enabled: true
  type: lb11
  locations:                    # Multi-location load balancers (one per location)
    - fsn1                      # Creates: eu-regional-global-lb-fsn1
    - hel1                      # Creates: eu-regional-global-lb-hel1
    - nbg1                      # Creates: eu-regional-global-lb-nbg1
  services:
    - protocol: https
      listen_port: 443
      destination_port: 80

# Cluster autoscaler (runs on masters with node affinity)
addons:
  cluster_autoscaler:
    enabled: true
```

#### Multi-Location Features

1. **Master Distribution**: Masters are distributed round-robin across locations
   - `instance_count: 3` with 3 locations → 1 master per location
   - Ensures etcd quorum survives a datacenter failure

2. **Worker Distribution**: Workers are distributed using the same pattern
   - `instance_count: 6` with 3 locations → 2 workers per location
   - Load balanced across regions

3. **Autoscaler Node Groups**: One node group per location
   - `eu-regional-autoscaling-fsn1` (manages fsn1 nodes)
   - `eu-regional-autoscaling-hel1` (manages hel1 nodes)
   - `eu-regional-autoscaling-nbg1` (manages nbg1 nodes)
   - Each scales independently based on regional demand

4. **NAT Gateways**: One NAT gateway per location
   - `eu-regional-nat-gateway-fsn1` (provides internet for fsn1)
   - `eu-regional-nat-gateway-hel1` (provides internet for hel1)
   - `eu-regional-nat-gateway-nbg1` (provides internet for nbg1)
   - Each location has dedicated outbound internet access

5. **Global Load Balancers**: One load balancer per location for true regional autonomy
   - `eu-regional-global-lb-fsn1` (serves workers in fsn1)
   - `eu-regional-global-lb-hel1` (serves workers in hel1)
   - `eu-regional-global-lb-nbg1` (serves workers in nbg1)
   - Each load balancer only targets workers in its location
   - Provides independent entry points per region (full cluster replica)

6. **API Load Balancers**: One API load balancer per location for complete regional autonomy
   - `eu-regional-api-lb-fsn1` (serves masters in fsn1)
   - `eu-regional-api-lb-hel1` (serves masters in hel1)
   - `eu-regional-api-lb-nbg1` (serves masters in nbg1)
   - Each API LB only targets masters in its location
   - Each region has independent Kubernetes API access (a full cluster replica per region)
   - The default kubeconfig points to the first region's API LB
   - All API LB IPs are included in the TLS SANs for certificate validity

7. **Master Affinity**: The autoscaler preferentially runs on master nodes
   - Implements the "master A → autoscaler A → pool A" pattern
   - Reduces cross-region latency for scaling operations

### 4. High Availability with Placement Groups

Placement Groups ensure that each cloud server runs on a different physical host, minimizing the impact of single host failures. This is essential for production workloads requiring high availability.

**Example Configuration:**

```yaml
cluster_name: ha-cluster
hetzner_token: YOUR_HETZNER_API_TOKEN_HERE
image: ubuntu-22.04
k3s_version: v1.28.5+k3s1

networking:
  ssh:
    public_key_path: ~/.ssh/id_rsa.pub
    private_key_path: ~/.ssh/id_rsa

# Masters with placement group
masters_pool:
  instance_type: cpx22
  instance_count: 3
  locations:
    - fsn1
    - nbg1
    - hel1

  placement_group:
    name: master
    type: spread
    labels:
      - key: tier
        value: control-plane

# Worker pools with placement groups
worker_node_pools:
  - name: database
    instance_type: cpx22
    instance_count: 3
    locations:
      - fsn1
      - nbg1

    placement_group:
      name: database
      type: spread
      labels:
        - key: workload
          value: database
        - key: ha
          value: "true"

    kubernetes:
      labels:
        - key: workload
          value: database
      taints:
        - key: dedicated
          value: database
          effect: NoSchedule
```

**Placement Group Features:**
- Each server in a placement group runs on a different physical host
- Reduces single-point-of-failure risk for stateful workloads
- Configurable per node pool (masters and workers)
- Optional Hetzner labels for organization and tracking
- Automatically cleaned up during cluster deletion

### Verify and Use the Cluster

```bash
# Check cluster
kubectl get all -A
kubectl get nodes
```

### Execute Commands on Cluster Nodes (Parallel)

```bash
# Run a command on all nodes
kuberaptor run --config cluster.yaml --command "uptime"

# Check disk usage on all nodes
kuberaptor run --config cluster.yaml --command "df -h /"

# Execute a script file
kuberaptor run --config cluster.yaml --script ./maintenance.sh

# Run on a specific node only
kuberaptor run --config cluster.yaml \
  --command "systemctl status k3s" \
  --instance mykubic-master-fsn1-1
```

### Upgrade Cluster to New K3s Version

```bash
# List available K3s versions
kuberaptor releases

# Upgrade cluster
kuberaptor upgrade --config cluster.yaml \
  --new-k3s-version v1.32.1+k3s1

# Verify upgrade
kubectl get nodes

# Force upgrade (skip confirmation)
kuberaptor upgrade --config cluster.yaml \
  --new-k3s-version v1.32.1+k3s1 \
  --force
```

### Delete Cluster and Clean Up Resources

```bash
kuberaptor delete --config cluster.yaml

# Force delete without confirmation
kuberaptor delete --config cluster.yaml --force
```

### List Available K3s Releases

```bash
kuberaptor releases

# Show more releases
kuberaptor releases --limit 50
```

### Check Cluster Budget

```bash
# Calculate estimated monthly cost
kuberaptor budget --config cluster.yaml

# Shows a cost breakdown for:
# - Master nodes
# - Worker nodes
# - Load balancers
# - Network resources
# - Total estimated monthly cost
```

---

## ⚖️ Features compared with other tools:

|                     | Kuberaptor             | Hetzner-k3s              | k3s or Managed Service        |
| ------------------- | ---------------------- | ------------------------ | ----------------------------- |
| **Base**            |                        |                          |                               |
| Idempotence         | ✅ Available [default] | ✅ Available [default]   | ➖ Setup required             |
| Deployment          | ✅ 2-3 minutes         | ✅ 2-3 minutes           | ➖ 30+ minutes                |
| Dependencies        | ✅ CLI tool only       | ➖ kubectl/helm required | ➖ Account, Terraform, Python |
| Data privacy        | ✅ Full control        | ✅ Full control          | ➖ Third-party access         |
| Credential exposure |
✅ Local PC, Autoscaler | ➖ Local PC, Autoscaler, Nodes, Scripts | ➖ API tokens, third party access |\n| Learning curve       | ✅ Low                  | ✅ Low                                  | ➖ Medium-High                    |\n| Monthly cost         | ✅ Infrastructure only  | ✅ Infrastructure only                  | ➖ Platform fees                  |\n|                      |                      |                                      |                                |\n| **Features**         |                      |                                      |                                |\n| Private Networking   | ✅ Available [default]  | ✅ Available                       | ➖ Setup required                 |\n| NAT Gateway          | ✅ Available [default]  | ➖ Setup required                       | ➖ Setup required                 |\n| DNS Zone             | ✅ Available [default]  | ➖ Setup required                       | ➖ Setup required                 |\n| SSL Certificate      | ✅ Available [default]  | ➖ Setup required                       | ➖ Setup required                 |\n| Global Load Balancer | ✅ Available [default]  | ➖ Setup required                       | ➖ Setup required                 |\n| HTTPS Service Target | ✅ Available [default]  | ➖ Setup required                       | ➖ Setup required                 |\n| Labels               | ✅ Hetzner + Kubernetes | ➖ Kubernetes                           | ➖ Setup required                 |\n| True Hetzner GEO HA  | ✅ Available [default]  | ➖ Setup required                       | ➖ Setup required                 |\n| Placement Groups     | ✅ Available [default]  | ✅ Available                            | ➖ Setup required                 |\n|                      |                      |                                      |                                |\n| **Options**          |                      |                                      |                                |\n| Budget estimator    
 | ✅ Available [default]  | ➖ Not Available                        | ➖ Setup required                 |\n| Config generator     | ✅ Available [default]  | ➖ Not Available                        | ➖ Setup required                 |\n\n\n## 🏆 Key Technical Features\n\n### Performance\n\n- **Ultra-fast startup**: Binary starts in less than 10ms\n- **Quick builds**: Full rebuild completes in approximately 30 seconds\n- **Efficient parallel execution**: Goroutine-based concurrency for 10x faster node operations\n- **Static binary**: Single executable with zero runtime dependencies\n- **Cross-platform**: Native compilation for Linux, macOS, Windows (AMD64 and ARM64)\n\n### Code Quality\n\n- **Type-safe**: Compile-time type checking throughout the codebase\n- **Comprehensive error handling**: Explicit error returns with context\n- **Concurrent operations**: Goroutines and channels for parallel processing\n- **Resource management**: Proper cleanup with defer statements\n- **Modular architecture**: Clear separation of concerns across packages\n- **Well-tested**: Unit tests and integration tests included\n\n### Developer Experience\n\n- **Fast iteration**: Quick builds enable rapid development\n- **Rich IDE support**: Full support in VSCode, GoLand, and other Go IDEs\n- **Official SDK**: Uses Hetzner Cloud Go SDK v2 for reliability\n- **Standard tooling**: Makefile for common operations\n- **Documentation**: Comprehensive code comments and external docs\n\n### Operations\n\n- **Single binary deployment**: No complex installation or dependencies\n- **Zero runtime requirements**: Fully static linking\n- **Container-friendly**: Can run in distroless/scratch containers\n- **Debug support**: Built-in pprof profiling and race detection\n- **Cross-compilation**: Build for any platform from any platform\n\n---\n\n## 🔧 Building and Installation\n\n### Prerequisites\n\n- **Go**: Version 1.24.0 or later\n- **Make**: For using the Makefile\n- **Git**: For cloning the repository\n\n### 
Quick Start\n\n```bash\n# Clone the repository\ngit clone https://github.com/magenx/kuberaptor.git\ncd kuberaptor\n\n# Build the binary\nmake build\n\n# Binary will be available at: dist/kuberaptor\n\n# Run the binary\n./dist/kuberaptor --help\n```\n\n### Build Commands\n\n```bash\n# Build for current platform\nmake build\n\n# Build for specific platforms\nmake build-linux        # Linux AMD64\nmake build-linux-arm    # Linux ARM64\nmake build-darwin-arm   # macOS ARM64 (Apple Silicon)\n\n# Build for all platforms\nmake build-all\n\n# Run tests\nmake test\n\n# Run tests with coverage\nmake coverage\n\n# Install to /usr/local/bin\nsudo make install\n\n# Clean build artifacts\nmake clean\n\n# Download and tidy dependencies\nmake deps\n\n# Format code\nmake fmt\n\n# Run linter (requires golangci-lint)\nmake lint\n```\n\n### Development Workflow\n\n```bash\n# 1. Make code changes\nvim internal/cluster/create_enhanced.go\n\n# 2. Format code\nmake fmt\n\n# 3. Run tests\nmake test\n\n# 4. Build binary\nmake build\n\n# 5. Test the binary\n./dist/kuberaptor create --config test-cluster.yaml\n\n# 6. 
Run linter before committing\nmake lint\n```\n\n### Installation from Binary\n\n```bash\n# Download the latest release for your platform\n# Example for Linux AMD64:\nwget https://github.com/magenx/kuberaptor/releases/latest/download/kuberaptor-linux-amd64\n\n# Make it executable\nchmod +x kuberaptor-linux-amd64\n\n# Move to PATH\nsudo mv kuberaptor-linux-amd64 /usr/local/bin/kuberaptor\n\n# Verify installation\nkuberaptor version\n```\n\n---\n\n## 📦 Dependencies\n\n### Core Dependencies\n\n| Package | Purpose | Version |\n|---------|---------|---------|\n| `github.com/hetznercloud/hcloud-go/v2` | Official Hetzner Cloud SDK | v2.36.0 |\n| `github.com/spf13/cobra` | CLI framework and commands | v1.10.2 |\n| `gopkg.in/yaml.v3` | YAML parsing and serialization | v3.0.1 |\n| `golang.org/x/crypto/ssh` | SSH client implementation | v0.48.0 |\n\n### Testing Dependencies\n\n- Go's built-in testing framework\n- Race detector for concurrency issues\n- Coverage tools for test coverage analysis\n\nAll dependencies are statically linked into the binary.\n\n---\n\n## 🌐 Website\n\nThis project includes a management UI built with React, Vite, TypeScript, and Tailwind CSS.\n\n**Live Site:** [https://www.kuberaptor.com/](https://www.kuberaptor.com/)\n\nThe website source code is in the `ui/` folder and is automatically deployed to GitHub Pages when changes are pushed to the main branch.\n\n---\n\n## Screenshots\n![Kuberaptor cli screenshot](.github/assets/termtop.png)\n![Kuberaptor cli screenshot](.github/assets/termbottom.png)\n\n**Project:** Kuberaptor  \n**Version:** dev  \n**Status:** WIP  \n**Language:** Go 1.24  \n**License:** MIT  \n**Last Updated:** 2026-02-22\n\n---\n> Idea: 
https://github.com/vitobotta/hetzner-k3s