https://github.com/axeII/home-ops
A repository for HomeOps where I perform Infrastructure as Code (IaC) and GitOps practices.
- Host: GitHub
- URL: https://github.com/axeII/home-ops
- Owner: axeII
- License: WTFPL
- Created: 2021-07-06T17:45:25.000Z (over 4 years ago)
- Default Branch: main
- Last Pushed: 2025-05-10T20:11:28.000Z (5 months ago)
- Last Synced: 2025-05-10T20:29:17.244Z (5 months ago)
- Topics: cert-manager, flux, k8s-at-home, kube-vip, kubernetes, sops, talos
- Language: Shell
- Homepage:
- Size: 4.83 MB
- Stars: 46
- Watchers: 1
- Forks: 1
- Open Issues: 23
Metadata Files:
- Readme: README.md
# Home Operations
### HomeOps repo managed by k8s :wheel_of_dharma:
_... automated via [Flux](https://github.com/fluxcd/flux2), [Renovate](https://github.com/renovatebot/renovate) and [GitHub Actions](https://github.com/features/actions)_ :robot:
---
## 📖 Overview
Here I apply DevOps best practices, but at home. Check out the hardware section, where I describe the hardware I am using. Thanks to Ansible, managing my home infrastructure and the cluster is very easy. I try to adhere to Infrastructure as Code (IaC) and GitOps practices using tools like [Kubernetes](https://github.com/kubernetes/kubernetes), [Flux](https://github.com/fluxcd/flux2), [Renovate](https://github.com/renovatebot/renovate) and [GitHub Actions](https://github.com/features/actions).

## ⛵ Kubernetes
There is a template over at [onedr0p/cluster-template](https://github.com/onedr0p/cluster-template) if you want to follow along with some of the practices I use here.
### Installation
My cluster has been migrated from a k3s/Longhorn setup to Talos with Rook Ceph. First of all, Talos is fantastic—I highly recommend it to anyone seeking a lightweight Kubernetes distribution. Currently, I’m running one node with the e1000 driver, while the second node lacks a reliable primary disk, so the cluster is operating in single-controller mode with two worker nodes. In the future, I plan to upgrade the setup to include three controller nodes.
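As a rough sketch of what a Talos node definition looks like, here is a minimal machine-config fragment. The hostname, endpoint address, and node roles are illustrative assumptions, not values taken from this repository:

```yaml
# Illustrative Talos machine-config fragment -- values are hypothetical.
machine:
  type: controlplane          # the two worker nodes use "worker" instead
  network:
    hostname: k8s-0
cluster:
  controlPlane:
    endpoint: https://192.168.1.10:6443   # e.g. the VIP advertised by kube-vip
```

Talos nodes are configured entirely through declarative files like this (applied with `talosctl apply-config`), which is what makes the distribution a natural fit for a GitOps workflow.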
The main reason I switched to Rook Ceph is that Longhorn felt less stable and is still under active development. I decided it was time to give Rook Ceph a try.
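A Rook Ceph setup of this kind typically boils down to a replicated block pool plus a StorageClass backed by the Ceph CSI driver. The names and replica count below are assumptions for illustration, not this cluster's actual manifests:

```yaml
# Hypothetical Rook Ceph block storage sketch -- names/sizes are illustrative.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: ceph-blockpool
  namespace: rook-ceph
spec:
  failureDomain: host   # spread replicas across nodes
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: ceph-blockpool
reclaimPolicy: Delete
```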
### Core Components
- [cert-manager](https://cert-manager.io/): SSL certificates, using the Cloudflare DNS-01 challenge.
- [cilium](https://github.com/cilium/cilium): CNI for Kubernetes.
- [cloudflared](https://github.com/cloudflare/cloudflared): Enables secure Cloudflare access to my ingresses.
- [external-dns](https://github.com/kubernetes-sigs/external-dns): Automatically syncs ingress DNS records to a DNS provider.
- [external-secrets](https://github.com/external-secrets/external-secrets): Manages Kubernetes secrets using [1Password Connect](https://github.com/1Password/connect).
- [flux](https://toolkit.fluxcd.io/): GitOps tool for deploying manifests from the `kubernetes` directory.
- [ingress-nginx](https://github.com/kubernetes/ingress-nginx): Kubernetes ingress controller using NGINX as a reverse proxy and load balancer.
- [k8s_gateway](https://github.com/ori-edge/k8s_gateway): DNS resolver for external Kubernetes resources.
- [kube-vip](https://kube-vip.io): Layer 2 load balancer for the Kubernetes control plane.
- [rook-ceph](https://rook.io): Storage class provider for data persistence.
- [reflector](https://github.com/emberstack/kubernetes-reflector): Mirrors ConfigMaps and Secrets to other Kubernetes namespaces.
- [reloader](https://github.com/stakater/Reloader): Restarts pods when a watched `ConfigMap` or `Secret` changes.
- [sops](https://github.com/getsops/sops): Manages secrets for Kubernetes, committed to Git in encrypted form.
- [spegel](https://github.com/spegel-org/spegel): Stateless cluster-local OCI registry mirror.

### ☸ GitOps
[Flux](https://github.com/fluxcd/flux2) watches my [kubernetes](./kubernetes) folder (see Directories below) and makes the changes to my cluster based on the YAML manifests.
The way Flux works for me here is it will recursively search the [kubernetes/apps](./kubernetes/apps) folder until it finds the most top level `kustomization.yaml` per directory and then apply all the resources listed in it. That aforementioned `kustomization.yaml` will generally only have a namespace resource and one or many Flux kustomizations. Those Flux kustomizations will generally have a `HelmRelease` or other resources related to the application underneath it which will be applied.
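The layering described above can be sketched with a minimal Flux `Kustomization`. The app name, path, and source name here are hypothetical, assuming the common `GitRepository`-backed layout:

```yaml
# Hypothetical Flux Kustomization for an app -- names and paths are illustrative.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: echo-server
  namespace: flux-system
spec:
  interval: 30m
  path: ./kubernetes/apps/default/echo-server/app   # folder holding the HelmRelease
  prune: true           # remove resources deleted from Git
  sourceRef:
    kind: GitRepository
    name: home-ops
```

Each such Flux Kustomization points at a folder whose own `kustomization.yaml` lists the `HelmRelease` and related resources for that application.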
[Renovate](https://github.com/renovatebot/renovate) watches my **entire** repository for dependency updates; when one is found, a PR is created automatically. When a PR is merged, [Flux](https://github.com/fluxcd/flux2) applies the change to my cluster.
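A Renovate configuration for this kind of repo usually just scopes the built-in `flux` and `helm-values` managers to the manifests folder. This is a sketch under that assumption, not the repository's actual config:

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "extends": ["config:recommended"],
  "flux": { "fileMatch": ["kubernetes/.+\\.ya?ml$"] },
  "helm-values": { "fileMatch": ["kubernetes/.+\\.ya?ml$"] }
}
```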
### Directories
This Git repository contains the following directories under [kubernetes](./kubernetes).
```sh
📁 kubernetes # Kubernetes cluster defined as code
├─📁 bootstrap # Flux installation
├─📁 flux # Main Flux configuration of repository
└─📁 apps # Apps deployed into my cluster grouped by namespace (see below)
```

### :file_cabinet: Hardware
My homelab runs on the following hardware (the Kubernetes nodes now run Talos Linux, as described above):
| Device | OS Disk Size | Data Disk Size | RAM | Purpose |
| ------------------------------ | ---------------- | -------------- | ---- | ---------------------------------------- |
| k8s-2 (Intel NUC) | 1TB SSD SATA | 250GB NVMe | 32GB | Talos node |
| k8s-1 (Udoo Bolt V8 AMD Ryzen) | eMMC 30GB | 250GB NVMe | 32GB | Talos node |
| k8s-0 (VM) | 250GB NVMe SCSI | 250GB NVMe | 32GB | Talos node with Nvidia GPU and NVMe Disk |
| TRUENAS | ZFS raidz1 40TB | 4x10TB HDD | 64GB | Storage |
| Unifi UDM Pro | SSD 14GB | HDD 1TB | 4GB | Router and security gateway |
| Unifi Switch 16 PoE | N/A | N/A | N/A | Switch with 802.3at PoE+ ports |
| Database Server | 20GB | N/A | 2GB | Database |
| Offsite Machine | 60GB | 8TB | 8GB | Offsite backup VM |

### 📰 Blog post
Feel free to checkout my blog [axell.dev](https://axell.dev) which is also [open source](https://github.com/axeII/my-blog)!
I have also written a blog post about my hardware choices, covering which ones worked out and which ones did not. [Click here](https://axell.dev/favorite/my-home-lab/).

## 🤝 Gratitude and Thanks
I am proud to be a member of the home operations (previously k8s-at-home) community! I received a lot of help and inspiration for my Kubernetes cluster from it. Thanks! :heart:
If you are interested in running your own k8s cluster at home, I highly recommend you to check out the [k8s-at-home](https://k8s-at-home.com) website.
Be sure to check out [kubesearch.dev](https://kubesearch.dev) for ideas on how to deploy applications or get ideas on what you may deploy.
## 🔏 License
See [LICENSE](https://raw.githubusercontent.com/axeII/home-ops/refs/heads/main/LICENCE).