https://github.com/rgl/sidero-vagrant
Vagrant Environment for playing with Sidero.
- Host: GitHub
- URL: https://github.com/rgl/sidero-vagrant
- Owner: rgl
- Created: 2021-07-16T18:21:07.000Z (over 4 years ago)
- Default Branch: main
- Last Pushed: 2021-10-19T19:16:49.000Z (almost 4 years ago)
- Last Synced: 2024-12-31T11:06:13.255Z (10 months ago)
- Topics: bare-metal, ipmi, kubernetes, sidero, talos
- Language: Shell
- Homepage:
- Size: 73.2 KB
- Stars: 5
- Watchers: 3
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
This is a [Vagrant](https://www.vagrantup.com/) Environment for playing with [Sidero](https://www.sidero.dev).
For playing with [Talos](https://www.talos.dev) see the [rgl/talos-vagrant](https://github.com/rgl/talos-vagrant) repository.
# Usage (Ubuntu 20.04)
Install docker, vagrant, vagrant-libvirt, and the [Ubuntu Base Box](https://github.com/rgl/ubuntu-vagrant).
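A minimal sketch of one way to install these prerequisites on Ubuntu 20.04 follows; the package names and the plugin build dependencies are assumptions, and the linked repositories remain the authoritative instructions (the Ubuntu Base Box, in particular, has to be built and added separately as described in rgl/ubuntu-vagrant):

```bash
# install docker, vagrant and libvirt from the Ubuntu repositories
# (assumption: the distribution packages are recent enough for this environment).
sudo apt-get install -y docker.io vagrant libvirt-daemon-system
# install the build dependencies for the vagrant-libvirt plugin, then the plugin itself.
sudo apt-get install -y build-essential ruby-dev libvirt-dev
vagrant plugin install vagrant-libvirt
```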
If you want to connect to the external physical network, you must configure your host network as described in [rgl/ansible-collection-tp-link-easy-smart-switch](https://github.com/rgl/ansible-collection-tp-link-easy-smart-switch#take-ownership-procedure) (e.g. have the `br-rpi` linux bridge) and set `CONFIG_PANDORA_BRIDGE_NAME` in the `Vagrantfile`.
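For example, before setting `CONFIG_PANDORA_BRIDGE_NAME`, you can sanity-check that the bridge is actually present on the host (`br-rpi` is the bridge name used in the linked repository):

```bash
# the bridge must exist before the environment is brought up;
# this should show the br-rpi linux bridge and its state.
ip link show br-rpi
```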
This environment sometimes hits the [GitHub rate limits (at the time of writing, 60 unauthenticated requests per hour)](https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting); as such, you might want to export the `GITHUB_USERNAME` and `GITHUB_TOKEN` environment variables before running `vagrant` to get a higher limit (5,000 requests per hour).
**NB** This token is also saved in the `.netrc` file inside the VMs.
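For example (placeholder values; use your own GitHub username and a personal access token):

```bash
# placeholder values; replace with your GitHub username and a personal access token.
export GITHUB_USERNAME='your-github-username'
export GITHUB_TOKEN='your-github-personal-access-token'
```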
Bring up the `pandora` virtual machine:
```bash
vagrant up --provider=libvirt --no-destroy-on-error pandora
```

Enter the `pandora` virtual machine and watch the progress:
```bash
vagrant ssh pandora
sudo -i
watch kubectl get servers,machines,clusters
```

In another shell, bring up the example cluster virtual machines:
```bash
vagrant up --provider=libvirt --no-destroy-on-error
```

Access the example cluster:
```bash
vagrant ssh pandora
sudo -i
kubectl get talosconfig \
-l cluster.x-k8s.io/cluster-name=example \
-o jsonpath='{.items[0].status.talosConfig}' \
>example-talosconfig.yaml
first_control_plane_ip="$(cat /vagrant/shared/machines.json | jq -r '.[] | select(.role == "controlplane") | .ip' | head -1)"
talosctl --talosconfig example-talosconfig.yaml config endpoints $first_control_plane_ip
talosctl --talosconfig example-talosconfig.yaml config nodes $first_control_plane_ip
# NB the following will only work after the example cluster has a working
# control plane (e.g. after the cp1 node is ready).
talosctl --talosconfig example-talosconfig.yaml kubeconfig example-kubeconfig.yaml
cp example-*.yaml /vagrant/shared
kubectl --kubeconfig example-kubeconfig.yaml get nodes -o wide
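# NB assuming /vagrant/shared is a synced folder, the copied example-*.yaml files
# should also appear on the host, next to the Vagrantfile, in the shared/ directory,
# so the example cluster can also be reached from the host, e.g.:
#   kubectl --kubeconfig shared/example-kubeconfig.yaml get nodes -o wide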
```

Access Kubernetes with k9s:
```bash
vagrant ssh pandora
sudo -i
k9s # management cluster.
k9s --kubeconfig example-kubeconfig.yaml # example cluster.
```

## Network Packet Capture
You can easily capture and see traffic from the host with the `wireshark.sh`
script, e.g., to capture the traffic from the `eth1` interface:

```bash
./wireshark.sh pandora eth1
```

# Notes
* only the `amd64` architecture is currently supported by sidero.
* see `kubectl get environment default -o yaml`

# Troubleshoot
* Sidero
* `clusterctl config repositories`
* `kubectl get crd servers.metal.sidero.dev -o yaml`
* `kubectl get clusters`
* `kubectl get servers`
* `kubectl get serverclasses`
* `kubectl get machines`
* `kubectl get taloscontrolplane`
* `kubectl get environment default -o yaml`
* `kubectl get ns`
* `kubectl -n sidero-system get pods`
* `kubectl -n sidero-system logs -l app=sidero`
* `kubectl -n capi-webhook-system get deployments`
* `kubectl -n capi-webhook-system get pods`
* `kubectl -n capi-webhook-system logs -l control-plane=controller-manager -c manager`
* `kubectl -n sidero-system logs -l control-plane=caps-controller-manager -c manager`
* `kubectl -n cabpt-system logs deployment/cabpt-controller-manager -c manager`
* Talos
* [Troubleshooting Control Plane](https://www.talos.dev/docs/v0.11/guides/troubleshooting-control-plane/)
* `talosctl -n cp1 dashboard`
* `talosctl -n cp1 logs controller-runtime`
* `talosctl -n cp1 logs kubelet`
* `talosctl -n cp1 disks`
* `talosctl -n cp1 get resourcedefinitions`
* `talosctl -n cp1 get machineconfigs -o yaml`
* `talosctl -n cp1 get staticpods -o yaml`
* `talosctl -n cp1 get staticpodstatus`
* `talosctl -n cp1 get manifests`
* `talosctl -n cp1 get services`
* `talosctl -n cp1 get addresses`
* `talosctl -n cp1 list /system`
* `talosctl -n cp1 list /var`
* `talosctl -n cp1 read /proc/cmdline`
* Kubernetes
* `kubectl get events --all-namespaces --watch`
* `kubectl --namespace kube-system get events --watch`