https://github.com/gsaslis/home-on-k8s
A privacy-focused smart home setup, on a k3s-powered raspberry pi cluster.
- Host: GitHub
- URL: https://github.com/gsaslis/home-on-k8s
- Owner: gsaslis
- Created: 2022-02-24T22:16:31.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2023-05-16T06:43:55.000Z (over 1 year ago)
- Last Synced: 2023-05-16T07:27:35.969Z (over 1 year ago)
- Topics: ansible, home-assistant, k3s, mosquitto, nextcloud, photoprism, pi-hole, raspberry-pi, smart-home, unifi-controller
- Language: Makefile
- Homepage:
- Size: 1.08 MB
- Stars: 15
- Watchers: 3
- Forks: 1
- Open Issues: 1
Metadata Files:
- Readme: README.asciidoc
README
= Smart Home on K8s
:toc: preamble

A Smart Home running on Kubernetes (for High Availability ... and fun! 🎉 )
== Hardware
I am using:
* Raspberry Pi 4 (4gb)
* Raspberry Pi 4 (8gb)
* Raspberry Pi 3 (1gb)
** (more raspberries coming, when I can finally find more `pi` to buy)
* 3 x https://www.amazon.com/SanDisk-Endurance-microSDXC-Adapter-Monitoring/dp/B07NY23WBG[SanDisk High Endurance 128GB SD cards]

== Operating system
=== Raspbian, with passwordless SSH access
1. Burn Raspbian Bullseye on an SD card, as per: https://www.raspberrypi.com/software/
+
[cols="1,1"]
|===
a| image::images/raspberry_pi_imager_options.png[Options,200]
a| image::images/raspberry_pi_os_lite.png[64bit Lite,200]
|===

1. Create an empty file called `ssh` in the root folder of the SD card. This enables SSH, which is crucial if you don't have your raspberry pi connected to a screen. Default username: `pi`, default password: `raspberry`.
1. Run `sudo raspi-config` to connect your Pi to Wi-Fi, if it is not connected via Ethernet. Also change the default password.
1. Enable passwordless SSH by copying your public key to the raspberry pi's `~/.ssh/authorized_keys` file. For convenience, you can also use `ssh-copy-id -i $PATH_TO_KEY pi@$PI_IP_ADDRESS`.
1. Ensure you have added the key to your agent with `ssh-add -K $PATH_TO_KEY`. Now try to connect via SSH and ensure you don't get a password prompt.

== Preparations
The first thing you'll need for the cluster setup is Ansible on your workstation. Your workstation will connect to each of the raspberry pi nodes via SSH and run all the commands in the Ansible playbook, while you sip your beverage of choice from the comfort of your chair.

You can install Ansible by following the guide here: https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html[Ansible Installation Guide]
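For example, a minimal sketch of installing it with `pip` on your workstation (pipx or your distro's ansible package work just as well; pick whichever the guide recommends for your OS):

[source,bash]
----
# Install Ansible into the user site-packages and confirm it is on the PATH.
python3 -m pip install --user ansible
ansible --version
----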
You'll also need the git CLI in order to check out the project and run `k3s-install` (see below). In case it isn't already installed (try `git --version`), something like `sudo apt install git-all` will do the job.

_Finally, keep in mind that you need to change the hostname(s) of the nodes where all the services will be installed. So, before running any make command, make sure you have edited the appropriate file, replacing my hostname(s) with yours._
== Install K3s (lightweight Kubernetes)
I was looking for an Ansible playbook that supports an HA setup of K3s, so that the applications we deploy can actually tolerate hardware failures of the individual Raspberry Pi nodes.
The biggest choice regarding the HA setup is whether you will use an external database, or you will go with the newer option of an https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/[embedded etcd cluster].
Considering I didn't have any other HA database at home, I went for the simplest - in terms of initial setup - option: the embedded etcd cluster.
The `k3s-install` Makefile target will point you to the ansible playbook used.
Before running `make k3s-install` you'll need to install the Ansible role for k3s. More info https://github.com/PyratLabs/ansible-role-k3s[here].
You'll also need to edit `k3s/ansible/inventory.yaml` so that it reflects your hardware inventory.
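As a rough sketch of those two steps (the Galaxy role name `xanmanning.k3s` is what the PyratLabs repository publishes under at the time of writing; double-check their README):

[source,bash]
----
# Install the k3s role from Ansible Galaxy...
ansible-galaxy role install xanmanning.k3s

# ...then, with k3s/ansible/inventory.yaml edited to match your own nodes:
make k3s-install
----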
//@gsaslis please help here with more info
=== Access your shiny new cluster
After the playbook is finished and everything has gone well, you need to access the cluster.
You can ssh into one of your PIs and copy the contents of `/etc/rancher/k3s/k3s.yaml`.
Paste the contents of this file into e.g. `~/.kube/k8spi` and use that as your KUBECONFIG file. Make sure to edit `127.0.0.1` or `localhost` in that file and replace it with the IP of your `control_plane` node.
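A rough sketch of those steps (the node IP `192.168.100.180` is just an example taken from my setup; use your own control-plane address):

[source,bash]
----
# Grab the kubeconfig from the first control-plane node
# (assumes the default Raspbian passwordless sudo for the pi user).
ssh pi@192.168.100.180 'sudo cat /etc/rancher/k3s/k3s.yaml' > ~/.kube/k8spi

# Point it at the node itself instead of localhost (GNU sed syntax).
sed -i 's/127.0.0.1/192.168.100.180/' ~/.kube/k8spi
----

Then point `kubectl` at it: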
[source,bash]
----
export KUBECONFIG=~/.kube/k8spi
----

Then, you should see:
[cols="1,1"]
|===
a| image::images/playbook_success.png[Options,400]
a| image::images/k3s_cluster_nodes.png[64bit Lite,400]
|===

Once you have everything ready, it's time to start deploying stuff on our [.line-through]#Kubernetes# K3s cluster!!
But we need a bit more setup first: a Dashboard, to have a web-based UI, and the NFS provisioner, so that we can have some persistent storage.
=== Setup Persistent Storage
For persistent storage, I have:

1. Enabled NFS on the Synology NAS: `Control Panel -> File Services -> NFS`
1. Created a Shared Folder (called ... "Kubernetes"). All PersistentVolumes will be created in this folder.
1. Allowed this shared folder to be accessed by the cluster IPs (see screenshot below)
+
image::images/synology_nfs_setup.png[Options,600]

1. Used the https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/tree/master/deploy[NFS subdir provisioner], which I have included in this repo.
1. Edited `nfs-subdir/standard/deployment.yaml` (add your own `NFS_SERVER` and `NFS_PATH` values there)
1. Installed with `make nfs-install`.

Please note that there is also another directory called `fast` inside `nfs-subdir`. You can safely ignore it, as it is only used if you need a second NFS storage location, for example one backed by faster SSD disks. If you need it, you can uncomment the respective line in the Makefile and make the changes in the `deployment.yaml` file.
=== Kubernetes dashboard
`make dashboard`
//todo document kubectl patch on traefik, with `--serversTransport.insecureSkipVerify=true` that allows to expose k8s dashboard
To log in to the dashboard, you need to create a token:
`kubectl --namespace kubernetes-dashboard create token admin-user`
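If you'd rather not paste the token every time, one option is to stash it in your kubeconfig as a named credential (a sketch; `dashboard-admin` is an arbitrary name):

[source,bash]
----
# Create a fresh token and store it under a kubeconfig credential.
kubectl config set-credentials dashboard-admin \
  --token="$(kubectl --namespace kubernetes-dashboard create token admin-user)"
----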
Use the token to log in, or add an entry to your `KUBECONFIG`:
```
token:
```

== Install Smart Home applications
With all that out of the way...
It's finally(!!) time to start making our home smarter!!! 🎉 🥳 🚀
**BUT BEFORE YOU START!!!**
=== DNS Config
In my manifests, you will find that I am using `*.k3s.yorgos.net.gr` domains, to expose services running within the K3s cluster.
In order for that to work, you need to add `A` records for your corresponding wildcard domain. I have 3 k3s cluster nodes, so I added 3 `A` records - one for each **[.underline]#local#** IP address.
image::images/dns_records.png[Options,600]
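To check that the wildcard records resolve to your local node IPs, a quick sanity check from your workstation (using my domain as an example; substitute your own):

[source,bash]
----
# Any hostname under the wildcard should return the (local) node addresses.
dig +short anything.k3s.yorgos.net.gr
----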
=== Pi-hole
Pi-hole is a network-based ad-blocking piece of software. It is also a custom DNS server (more on this in a bit).
I deploy pi-hole because I can't run uBlock Origin on every browser of every device (not all of them are controlled by me 😅), so having a way to block ads on the network level is 🎉!
It uses https://en.wikipedia.org/wiki/DNS_sinkhole[DNS sinkholing] and blocklists as a way of stopping internet ads, malware, malvertising, etc.
Apart from the ad-blocking functionality, running a local DNS server helps me with one more thing: DNS resolution on Internet-blocked devices.
As you may have noticed, I use public DNS entries to map to local IP addresses: e.g. all my apps are deployed under `*.k3s.yorgos.net.gr`.
You will need to create a secret in the `pihole` namespace (create it if it doesn't exist):
[source,yaml]
----
---
apiVersion: v1
kind: Secret
metadata:
  name: pihole-secret
type: Opaque
stringData:
  password: some-super-secure-pass-for-your-pihole-web-ui
----

Once the secret has been created, you can:
`make pihole`
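For reference, a minimal sketch of the namespace/secret steps above (assuming you saved the manifest as `pihole-secret.yaml`, a filename I just made up):

[source,bash]
----
# Create the namespace if it doesn't exist yet, then add the secret and deploy.
kubectl create namespace pihole
kubectl --namespace pihole apply -f pihole-secret.yaml
make pihole
----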
=== Mosquitto
I need my smart home automation software to access my smart devices (Shelly/Sonoff plugs, switches, etc.) - for example, all my switches need to connect to the MQTT server (mosquitto). BUT - **I block internet access on all these devices on my router**! So, without a local DNS server, I would need to use a single IP address for my MQTT server and somehow ensure that a load balancer runs in High Availability behind that IP address.

Instead of doing that, I can add 2 `A` records for `mosquitto.k3s.yorgos.net.gr`, pointing to IP addresses `192.168.100.180` and `192.168.100.181`, and my smart home setup will survive an outage of either of the two nodes!! (I only run a single instance of mosquitto, but Kubernetes will ensure it always runs on one of these two nodes, and this way the clients will always find and connect to it!)
To install mosquitto itself, use:
`make mosquitto`
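To confirm the broker is reachable through the cluster DNS name, a quick test from any machine with the Mosquitto clients installed (the topic is arbitrary; add credentials if you have configured authentication):

[source,bash]
----
# Publish a test message to the broker behind the wildcard DNS name.
mosquitto_pub -h mosquitto.k3s.yorgos.net.gr -t home/test -m "hello from k3s"
----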
=== Home-Assistant
`make home-assistant`
After Home Assistant is installed, you will need to add the following section to your `configuration.yaml` (which probably lives in your NAS persistent volume folder):
```yaml
# Uncomment this if you are using SSL/TLS, running in Docker container, etc.
# http:
# base_url: example.duckdns.org:8123
http:
  server_host: 0.0.0.0
  # optional: ip_ban_enabled: true
  # optional: login_attempts_threshold: 5
  use_x_forwarded_for: true
  trusted_proxies:
    # Pod CIDR
    - 10.42.0.0/16
    # Node CIDR
    - 192.168.100.0/24
```

=== Nextcloud
For nextcloud, you will need to create 2 kubernetes secrets:
[source,yaml]
----
---
apiVersion: v1
kind: Secret
metadata:
  name: nextcloud-postgres-secrets
  labels:
    app: nextcloud-postgres
type: Opaque
stringData:
  POSTGRES_PASSWORD: "your postgres password"
  nextcloud-db-user: nextcloud
  nextcloud-db-password: "your nextcloud user db password"
---
apiVersion: v1
kind: Secret
metadata:
  name: nextcloud-secrets
type: Opaque
stringData:
  nextcloud-db-user: nextcloud
  nextcloud-db-password: "your nextcloud user db password"
  nextcloud-user: admin
  nextcloud-password: "some super secure pass for your admin user"
----
After you have created these 2 secrets in the `nextcloud` namespace (e.g. with `kubectl apply`), you can go ahead and run:
`make nextcloud`
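The first start can take a little while on a Raspberry Pi; to keep an eye on it:

[source,bash]
----
# Watch the Nextcloud pods come up in the nextcloud namespace.
kubectl --namespace nextcloud get pods --watch
----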
=== Photoprism
Photoprism has become my "Google Photos" / "Apple Photos" **privacy-friendly** alternative solution.
First, you will need to create 2 kubernetes secrets. One for photoprism and one for mariadb - the backing database:
[source,yaml]
----
---
apiVersion: v1
kind: Secret
metadata:
  name: photoprism-mariadb-secrets
  namespace: databases
stringData:
  MARIADB_ROOT_PASSWORD: "some super clever root password"
  MARIADB_PASSWORD: "yet another password you shouldn't upload to github"
---
apiVersion: v1
kind: Secret
metadata:
name: photoprism-secrets
namespace: photoprism
stringData:
PHOTOPRISM_DATABASE_DRIVER: "mysql" # use MariaDB 10.5+ or MySQL 8+ instead of SQLite for improved performance
PHOTOPRISM_DATABASE_SERVER: "photoprism-db:3306" # MariaDB or MySQL database server (hostname:port)
PHOTOPRISM_DATABASE_NAME: "photoprism" # MariaDB or MySQL database schema name
PHOTOPRISM_DATABASE_USER: "photoprism" # MariaDB or MySQL database user name
PHOTOPRISM_DATABASE_PASSWORD: "this should match with MARIADB_PASSWORD" # MariaDB or MySQL database user password
PHOTOPRISM_ADMIN_PASSWORD: "bla bla bla"
----After you have created these 2 secrets (e.g. with `kubectl apply` ), you can go ahead and run:
`make mariadb photoprism`
=== Unifi Controller
After first buying the Unifi equipment, I ran the Unifi Controller on my laptop. But that wasn't convenient because there were times I wanted to check my network configuration when I wasn't at my laptop.
Then I moved it to my NAS, running on Docker Compose. But that wasn't great either, because the Unifi Controller is quite demanding in terms of resources and (I think!!) it caused my NAS to hang a couple of times. *Not* good!!
So, finally, I decided to migrate it to the k3s cluster, for greater reliability.
To deploy it, just:
`make unifi-controller`
=== Shairport-Sync (Airplay)
Shairport-Sync is a great AirPlay 1 emulator. I have used it reliably for a number of years to turn an old set of roof speakers in my living room into a "smart speaker" that I can stream music to **over WiFi** (not Bluetooth).
In terms of deploying shairport-sync, the thing to be aware of is that **we clearly only want it to run on a single raspberry** of our raspberry pi cluster - the one that is actually connected to the speakers!
We can achieve that with the following combination:
Add a label to the node:
[source,bash]
----
$ kubectl label nodes node3-k3s app=audio
node/node3-k3s labeled
----

Instruct the app to be deployed on the node with that label:
[source,yaml]
----
nodeSelector:
  app: audio
----

=== Mycroft Voice Assistant
link:mycroft/README.asciidoc[Mycroft-specific instructions]
=== Mimic3 Text-to-Speech engine
i.e. don't let your Mycroft sound like a robot!
1. Deploy mimic3 on k3s:
```bash
make mimic3
```

2. Once Mimic3 is up and running, it is time to point Mycroft to it:
Use the following in your `/home/pi/.config/mycroft.conf` (you might need to move it to this path from `/home/pi/.mycroft/mycroft.conf`):
```bash
"tts": {
"marytts":{
"url": "https://mimic3.k3s.yorgos.net.gr/",
# "voice":"en_UK/apope_low"
# switched to the below, after my daughter's request
"voice":"en_US/hifi-tts_low#92"
},
"module":"marytts"
}
```

* reboot
=== Traefik improvements
Run `make traefik` to:
* expose the Traefik dashboard, to help with debugging
* switch from a Deployment to a DaemonSet, to ensure Traefik runs on all Raspberries
* provide an option to enable the access log on Traefik

**IMPORTANT**: the Traefik dashboard will be available at `traefik./dashboard/`. Do **NOT** forget the `/dashboard/` part, **including** the final `/`.
=== What about Storage downtime?
There are times when your Network-Attached Storage (NAS) will go down: because you want to upgrade your NAS software, because you want to change hard disks (and that requires downtime), or because of a power failure that exceeds your Uninterruptible Power Supply (UPS) limits.

In those cases, it is very useful to have a way to gracefully terminate any internal services that rely on Kubernetes persistent volumes provided by your NAS. In this repo, you can use the `nas-shutdown` and `nas-restart` make targets.
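Conceptually, a graceful storage shutdown boils down to scaling the NFS-backed workloads to zero before the NAS goes away, and scaling them back up afterwards. A hand-rolled sketch of that idea (not necessarily what the make targets do; the namespace list is purely illustrative):

[source,bash]
----
# Before taking the NAS offline: scale NFS-backed workloads down to zero.
for ns in nextcloud photoprism; do
  kubectl --namespace "$ns" scale deployment --all --replicas=0
done

# Once the NAS is back: scale them up again.
for ns in nextcloud photoprism; do
  kubectl --namespace "$ns" scale deployment --all --replicas=1
done
----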
=== HTTPS / TLS (with cert-manager and LetsEncrypt)
Choosing LetsEncrypt for non-self-signed (and non-paid-for) certificates comes with one caveat: we cannot use the `http01` solver, because we don't have any public endpoints that LetsEncrypt can verify the challenge text on.

We *can*, however, use the `dns01` solver - with an additional dependency on Cloudflare (where my `yorgos.net.gr` domain is set up). cert-manager will use a secret API key to contact Cloudflare and instruct it to set up the domains accordingly, in order to prove ownership to LetsEncrypt and allow the certificates to be issued correctly.

With that in mind, all the resources required are in the `cert-manager/` folder.

We use:
* the helm chart to deploy `cert-manager`
* a letsencrypt *staging* ClusterIssuer, for our tests. LetsEncrypt applies
rate limiting and it's easy to go over during initial tests. Use
`letsencrypt-staging` for all your tests and then switch your ingress to
`letsencrypt-prod` once everything is fully working.
* a traefik `middleware` to always redirect to HTTPS
* a k8s *secret*, like the one below, to hold the Cloudflare "Global API Key" - do NOT use "API tokens" or the "Origin CA Key":

[source,yaml]
----
---
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-key-secret
type: Opaque
stringData:
  # Cloudflare "Global API Key" - do NOT use "API tokens" or the "Origin CA Key"
  api-key:
----
After you have added this secret (with the correct value) to the `cert-manager` folder, you should just need to invoke:
`make cert-manager`
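If you prefer to create the Cloudflare secret imperatively instead of from a YAML file, a sketch (the `cert-manager` namespace and the `CF_API_KEY` environment variable are assumptions on my part; adjust to wherever your ClusterIssuer expects the secret):

[source,bash]
----
# Create the secret holding the Cloudflare Global API Key.
kubectl --namespace cert-manager create secret generic cloudflare-api-key-secret \
  --from-literal=api-key="$CF_API_KEY"
----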