# doks-k8s
Here is a **detailed breakdown** of five ways to hand over a DigitalOcean Kubernetes (DOKS) cluster to a client, along with **alternative approaches** based on different requirements.

---

## **1. Transfer DigitalOcean Kubernetes Cluster Ownership**
This is the **simplest** and most direct method if the client wants **full control** over the cluster, including billing.

### **Steps:**
1. **Invite the Client to Your DigitalOcean Team**
   - Go to **DigitalOcean Console → Teams → Manage Team**.
   - Click **Invite Members** and enter the client's email.
   - Assign them the **Owner** role.

2. **Transfer Cluster Ownership**
   - Once they join, go to **Kubernetes** in the DigitalOcean dashboard.
   - Click the cluster → **Settings** → **Transfer**.
   - Select the client's account as the new owner.

3. **Client Accepts the Transfer**
   - The client will receive an email.
   - Once they **accept**, they become the new owner, and you can remove yourself from the team.

### **Alternative Approach:**
- If you don't want to hand over your **entire team**, create a **new DigitalOcean team**, set up the cluster there, and then **invite the client**.
- The client can then **remove you from the team** once they confirm ownership.

🔹 **Best For:** When the client wants **full control and billing responsibility**.

---

## **2. Export and Provide Cluster Configuration (`kubeconfig`)**
If the client wants **access** to the cluster but does **not need ownership**, you can share the **kubeconfig file**.

### **Steps:**
1. **Get the kubeconfig File**
   Run the following command in your terminal:
   ```sh
   doctl kubernetes cluster kubeconfig save <cluster-name>
   ```
   - This merges the cluster's credentials into `~/.kube/config`.

2. **Send the Configuration to the Client**
   - The file is located at `~/.kube/config`.
   - Share it with the client over a **secure channel** — it contains admin credentials for the cluster.
   - They can use it to connect with `kubectl`:
     ```sh
     kubectl get nodes
     ```

### **Alternative Approach:**
- Instead of granting full access, create a **read-only service account** for the client and restrict its permissions using Role-Based Access Control (RBAC).
- Example YAML for a **limited-access role** (note that a Role does nothing until it is bound to a user or service account via a RoleBinding):
  ```yaml
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    namespace: default
    name: read-only-role
  rules:
    - apiGroups: [""]
      resources: ["pods", "services"]
      verbs: ["get", "list"]
  ```

🔹 **Best For:** When the client **only needs access, not ownership**.

---
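To show how the read-only Role above would actually be used, here is a minimal sketch — the service-account and binding names are illustrative, and `kubectl create token` requires Kubernetes v1.24+ — that creates a service account, binds the role to it, and issues a short-lived token for the client:

```shell
# Create a service account for the client (name is illustrative)
kubectl create serviceaccount client-readonly -n default

# Bind the read-only Role from the example above to that service account
kubectl create rolebinding client-readonly-binding \
  --role=read-only-role \
  --serviceaccount=default:client-readonly \
  -n default

# Issue a short-lived token the client can add to their kubeconfig
kubectl create token client-readonly -n default --duration=24h
```

The client adds the token as a `user` entry in their kubeconfig; because the Role is namespaced, they can only `get`/`list` pods and services in `default`.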
## **3. Provide Terraform or Helm Scripts to Recreate the Cluster**
If the client wants to **deploy the cluster themselves**, give them the **Terraform or Helm files** used to create it.

### **Steps:**
1. **Export the Terraform Configuration (if applicable)**
   If you used Terraform, share the `.tf` file:
   ```hcl
   resource "digitalocean_kubernetes_cluster" "example" {
     name    = "client-k8s"
     region  = "nyc1"
     version = "1.27.2-do.0"

     node_pool {
       name       = "worker-pool"
       size       = "s-2vcpu-4gb"
       node_count = 3
     }
   }
   ```
   - The client runs:
     ```sh
     terraform init
     terraform apply
     ```
     to create the cluster.

2. **Share Helm Charts (if applicable)**
   If you deployed workloads using **Helm**, package and send the chart:
   ```sh
   helm package mychart/
   ```
   - This produces `mychart-<version>.tgz`, which the client installs with:
     ```sh
     helm install my-release mychart-<version>.tgz
     ```

### **Alternative Approach:**
- If you used **Pulumi** or **Ansible**, share those scripts instead.

🔹 **Best For:** When the client wants to **create the cluster on their own**.

---

## **4. Backup and Restore to Client's Cluster**
If the client **already has a K8s cluster** and just wants **your workloads**, you can export the existing state.

### **Steps:**
1. **Backup All Manifests from the Current Cluster**
   ```sh
   kubectl get all -o yaml > backup.yaml
   ```
   - Note that `get all` covers only the current namespace and a limited set of resource types.

2. **Send the File to the Client**

3. **Client Restores the Configuration**
   - They apply the backup in their own cluster:
     ```sh
     kubectl apply -f backup.yaml
     ```

### **Alternative Approach:**
- `kubectl get all` omits resources such as ConfigMaps and Secrets. To capture the workload-related resources explicitly, use:
  ```sh
  kubectl get deployments,services,configmaps,secrets -o yaml > deployment-backup.yaml
  ```
  - This includes the configuration your workloads depend on while skipping everything else.

🔹 **Best For:** When the client already has a **running cluster**.

---
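On larger clusters, a single export file gets unwieldy. As a sketch (the resource list and output directory are illustrative; extend them as needed), the per-resource export above can be widened to walk every namespace and write one file each:

```shell
#!/usr/bin/env bash
# Export common workload resources from every namespace,
# one YAML file per namespace. Requires cluster access via kubectl.
set -euo pipefail

OUTDIR=cluster-backup  # illustrative output directory
mkdir -p "$OUTDIR"

for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl get deployments,services,configmaps,secrets \
    -n "$ns" -o yaml > "$OUTDIR/$ns.yaml"
done
```

The client can then review each file and `kubectl apply -f` only the namespaces they actually want.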
## **5. Migrate Workloads Using Velero (For Full Backup)**
If the client wants an **exact replica**, use **Velero**, a tool designed specifically for K8s backup and migration.

### **Steps:**
1. **Install Velero on Your Cluster**
   ```sh
   velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.4.0 \
       --bucket <backup-bucket> --backup-location-config region=<region>
   ```
   - DigitalOcean Spaces is S3-compatible, so the AWS provider plugin works; for Spaces, also pass `s3Url=https://<region>.digitaloceanspaces.com` in `--backup-location-config`.

2. **Backup Everything**
   ```sh
   velero backup create full-backup --include-namespaces '*'
   ```

3. **Give the Client Access to the Backup**
   - The backup is stored in the object-storage bucket, so the client installs Velero against the **same bucket** and restores with:
     ```sh
     velero restore create --from-backup full-backup
     ```

### **Alternative Approach:**
- Instead of backing up everything, back up **only specific namespaces**:
  ```sh
  velero backup create my-backup --include-namespaces my-namespace
  ```

🔹 **Best For:** When you need a **full backup and restore** across different clusters.

---

## **Which Method Should You Choose?**
| **Scenario** | **Method** |
|-------------|------------|
| Client wants full ownership, including billing | **Transfer DigitalOcean ownership** |
| Client just needs access to manage the cluster | **Share `kubeconfig` file** |
| Client wants to recreate the cluster on their account | **Provide Terraform/Helm scripts** |
| Client has a cluster and wants your workloads | **Backup & Restore with `kubectl`** |
| Client wants an exact copy of your cluster | **Use Velero for migration** |

---

💡 **Final Recommendation:**
If the client is new to Kubernetes and doesn't want to manage infrastructure, the **best option** is **transferring cluster ownership**. If they are experienced, provide the Terraform files so they can recreate the cluster in their own account.