# Inception-of-Things

A Kubernetes cluster development project using k3s, k3d and vagrant.

# Table of Contents

- [About the Project](#about-the-project)
- [Part 1](#part-1)
- [Part 2](#part-2)
- [Part 3](#part-3)
- [Installation](#installation)
- [Resources](#resources)

# About the Project

Inception-of-Things is a 42 school project aimed at deepening our knowledge of the Kubernetes ecosystem through technologies
such as **k3s**, **k3d**, **Vagrant**, and **ArgoCD**.

This project is divided into 3 parts, separated by the folders **p1**, **p2**, and **p3**. All of these parts were completed inside a Virtual Machine.

This project was carried out with [hugorclt](https://github.com/hugorclt/iot), and [Oryzon00](https://github.com/Oryzon00) 🤙

## Part 1
The first part introduces us to the use of k3s and vagrant, with the goal of creating a cluster containing a server node and a worker node.

It is therefore necessary to create two VMs from a Vagrantfile:
```ruby
Vagrant.configure("2") do |config|

config.vm.box = "base"
config.vm.provider :libvirt do |lv|
lv.memory = "1024"
lv.cpus = "1"
end

config.vm.define "sleleuS" do |server|
server.vm.box = "debian/bookworm64"
server.vm.hostname = "sleleuS"
server.vm.network "private_network", ip: "192.168.56.110"
server.vm.provision "shell", path: "scripts/server.sh"
end

config.vm.define "sleleuSW" do |worker|
worker.vm.box = "debian/bookworm64"
worker.vm.hostname = "sleleuSW"
worker.vm.network "private_network", ip: "192.168.56.111"
worker.vm.provision "shell", path: "scripts/worker.sh"
end
end
```

Each virtual machine will launch its own provisioning script, allowing the installation of k3s and connection to the cluster.

**On the server side**: The Vagrant user needs to know where the configuration file that k3s generates during installation is located, in order to use kubectl
without any issues: `echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> /home/vagrant/.profile`. The permissions of this file must also be relaxed so that it can be read, since the
provisioning runs as the root user: `K3S_KUBECONFIG_MODE=644`

Finally, since the flannel service listens by default on the eth0 network interface, the eth1 interface must be specified so that the node gets the correct internal IP: `INSTALL_K3S_EXEC='--flannel-iface=eth1'`
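
Putting these pieces together, a `scripts/server.sh` provisioning script could look roughly like this sketch, based on the steps above (not necessarily the exact script used in the repo):

```bash
#!/bin/bash
set -e

# Install k3s in server mode, with a readable kubeconfig and flannel bound to eth1
export K3S_KUBECONFIG_MODE=644
export INSTALL_K3S_EXEC='--flannel-iface=eth1'
curl -sfL https://get.k3s.io | sh -

# Let the vagrant user find the kubeconfig generated by k3s
echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> /home/vagrant/.profile

# Wait for the node token, then share it with the worker through /vagrant
while [ ! -f /var/lib/rancher/k3s/server/token ]; do sleep 1; done
cp /var/lib/rancher/k3s/server/token /vagrant/token
```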

**On the worker side**: Creating the worker is even simpler: it only requires installing and launching k3s, passing the server's IP and port in the env variable `K3S_URL`
and the token generated by the server in the env variable `K3S_TOKEN`. By default, this token is created during the server's installation in the file `/var/lib/rancher/k3s/server/token`.

It is therefore enough to wait for this token to be created and copy it into the `/vagrant` directory, which Vagrant shares by default with every virtual machine it launches, making the file reachable from both VMs.
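
A matching `scripts/worker.sh` could then look like the following sketch (again, the real script may differ slightly; using the flannel eth1 option on the worker as well is an assumption):

```bash
#!/bin/bash
set -e

# Wait until the server has shared its token through /vagrant
while [ ! -f /vagrant/token ]; do sleep 1; done

# Join the cluster as an agent, using the server's IP and the shared token
export K3S_URL=https://192.168.56.110:6443
export K3S_TOKEN=$(cat /vagrant/token)
export INSTALL_K3S_EXEC='--flannel-iface=eth1'
curl -sfL https://get.k3s.io | sh -
```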

If everything is configured correctly, it is possible to verify from the server (via SSH) that the worker is correctly connected, with the command `kubectl get nodes -o wide`:

![Screenshot from 2024-03-15 18-24-45](https://github.com/Sleleu/Inception-of-Things/assets/93100775/6ed098da-82cc-4097-a5a5-0b8c53920b10)

## Part 2
This second part introduces us to the deployment of Kubernetes applications with k3s. From a single virtual machine with k3s installed in server mode, the goal is to deploy 3 web applications following this diagram:

![Screenshot from 2024-03-15 16-58-12](https://github.com/Sleleu/Inception-of-Things/assets/93100775/8855b210-02a1-48c5-a9df-18fac75944cc)

The cluster is reachable at the IP `192.168.56.110`, and the application served depends on the HOST used: app1.com gives access to app1, app2.com to app2, and so on. If no HOST matches, access defaults to application 3.

Application 2 must have 3 replicas. The number of replicas defines the number of pods of the application, each pod being a clone of the site, which in practice allows a larger amount of traffic to be handled.
Kubernetes can also adjust these replicas automatically based on the site's load, using the `kubectl autoscale` command.
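
For illustration, an autoscaler for app2 could be created like this (the deployment name and thresholds are assumptions, not values imposed by the subject):

```bash
# Scale app2 between 3 and 10 pods depending on CPU usage
kubectl autoscale deployment app2 --min=3 --max=10 --cpu-percent=80
```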

To accomplish this deployment, it is first necessary to add the different hosts to the `/etc/hosts` file so that each hostname resolves to the server's IP:

```bash
for i in {1..3}; do
    line="192.168.56.110 app$i.com"

    if ! grep -q "$line" /etc/hosts; then
        echo "$line" >> /etc/hosts
        echo "Added HOST $line"
    else
        echo "$line already exists"
    fi
done
```

Then, for deployment, Kubernetes will use several configuration files, including **ingress**, **service**, and **deployment** types.

Following the path of a request from the user to the application, the request is first handled by the ingress. The ingress acts as a reverse proxy, defining how our pods are made accessible.

The service exposes the pods behind a stable name and port, routing incoming traffic to the containers listening inside them.

Finally, the deployment file specifies which image the application is built from, as well as its number of replicas, potential environment variables, and so on.

For example, a basic [ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) will take this form:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
```
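
In the same spirit, a minimal Deployment and Service pair for one of the applications could look like the sketch below (the names, labels, and ports are illustrative, not the exact manifests of the repo):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2-deployment
spec:
  replicas: 3                      # the 3 replicas required for app2
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - name: app2
        image: paulbouwer/hello-kubernetes:1.10.1
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: app2-service
spec:
  selector:
    app: app2
  ports:
  - port: 80
    targetPort: 8080               # the port the container listens on
```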
It's worth noting that with the use of configuration files, the imperative gives way to the declarative.
The configuration files add a layer of abstraction: we describe the desired state, and Kubernetes takes care of establishing the corresponding services itself.
This simplifies versioning and updating the services, compared to managing them by hand from the CLI.

Once the services are launched, the applications can be reached with a simple curl or from a web browser:

![Screenshot from 2024-03-15 19-09-28](https://github.com/Sleleu/Inception-of-Things/assets/93100775/03143f77-30f7-47a1-aacf-dbfd557388b2)

For this project, we used paulbouwer's [hello-kubernetes](https://hub.docker.com/layers/paulbouwer/hello-kubernetes/1.10.1/images/sha256-a53650885c21b4f6f284380db6e67ccbeb2159a23d4515913dba31384a665357?context=explore) image, available on DockerHub.

By connecting via SSH to the VM, we can verify that the services are working as expected with `kubectl get all`:

![Screenshot from 2024-03-15 19-08-05](https://github.com/Sleleu/Inception-of-Things/assets/93100775/00994336-990a-46cd-a87a-26de558d76ef)

Everything seems to be working fine!

## Part 3

This time, moving away from Vagrant, we're setting up a small infrastructure taking this form:

![Screenshot from 2024-03-15 18-07-07](https://github.com/Sleleu/Inception-of-Things/assets/93100775/b1ddfc97-a36f-4cb1-b08f-33c84538dc94)

We need to **set up a k3d cluster featuring an ArgoCD service and an application**. ArgoCD manages the deployment of this application,
whose configuration files live in a public GitHub repository, so the application running in the
Kubernetes cluster is automatically updated with every change to those files.

K3d is a wrapper around k3s that runs clusters inside Docker containers. Docker therefore has to be installed in addition to
k3d in order to use it.
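
As an example, a cluster exposing the cluster's ingress on port 8888 of the host can be created like this (the cluster name and port mapping are assumptions consistent with the `localhost:8888` access shown later):

```bash
# Create a k3d cluster and map host port 8888 to port 80 of the built-in load balancer
k3d cluster create iot --port "8888:80@loadbalancer"
```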

Once the tools are installed, it will be necessary to create a namespace for ArgoCD and a dev namespace that will contain
the application deployed by ArgoCD:

```bash
kubectl create namespace argocd
kubectl create namespace dev
```

It's possible to create the services, controllers, and deployment of ArgoCD from its installation manifest available on GitHub:

```bash
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```

A configuration file can also be provided to ArgoCD in order to set up an application declaratively:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: wil
spec:
  project: default
  source:
    repoURL: https://github.com/sleleu/iot-dev-sleleu.git
    targetRevision: HEAD
    path: config
  destination:
    server: https://kubernetes.default.svc
    namespace: dev
```
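
Applying this manifest in the `argocd` namespace registers the application (the filename is illustrative):

```bash
kubectl apply -n argocd -f wil-application.yaml
```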

This configuration file points to the public repo [iot-dev-sleleu](https://github.com/Sleleu/iot-dev-sleleu), which contains the deployment of a DockerHub image provided by the school's subject: https://hub.docker.com/r/wil42/playground.
This image has two versions, which makes it possible to test the automatic update of the application after a commit on the GitHub repo.
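
This automatic update relies on ArgoCD's automated sync, which can be enabled directly in the `Application` manifest with a `syncPolicy` section like the one below (whether the project uses exactly these options is an assumption):

```yaml
spec:
  syncPolicy:
    automated:
      prune: true     # delete resources that were removed from the repo
      selfHeal: true  # revert manual changes made in the cluster
```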

Access to ArgoCD is configured via an ingress, making it reachable at the host `argocd-server.com:8888`, while the application itself is exposed by path:

![Screenshot from 2024-03-15 18-44-31](https://github.com/Sleleu/Inception-of-Things/assets/93100775/07488e01-e495-4873-afc2-5d23823c5f05)

Once the application is created, the admin panel will ask for the login `admin`, as well as the password automatically generated when the ArgoCD service is created in the cluster, which can be retrieved with
this command: `kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d`:

![Screenshot from 2024-03-15 18-46-18](https://github.com/Sleleu/Inception-of-Things/assets/93100775/c115953b-d6d6-4a39-8cf4-cda9a85325c5)

A panel showing the state of the service is then available:

![Screenshot from 2024-03-15 18-50-57](https://github.com/Sleleu/Inception-of-Things/assets/93100775/261279dc-cdf3-4340-a982-e9849242eb29)

After switching the image from v1 to v2 and committing, ArgoCD retrieves the latest commit and updates the application accessible at `localhost:8888` in real time:

```bash
# Before commit
➜ p3 git:(main) ✗ curl localhost:8888
{"status":"ok", "message": "v1"}

# After commit
➜ p3 git:(main) ✗ curl localhost:8888
{"status":"ok", "message": "v2"}
```

And that concludes part 3!

# Installation

This project has a Makefile in each part; just run the `make` command to set up the architecture.
However, some tools must be installed on your machine (or VM) to launch the project.
For parts 1 and 2, Vagrant is required:

```bash
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt update && sudo apt install vagrant
```
as well as the libvirt provider: https://vagrant-libvirt.github.io/vagrant-libvirt/
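
On a Debian-based system, installing the plugin typically looks like this (the exact build dependencies can vary; see the link above):

```bash
# System packages needed to build the plugin, then the plugin itself
sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-dev
vagrant plugin install vagrant-libvirt
```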

An installation script named `install.sh` is included in part 3 to assist in installing the rest of the tools.

# Resources
### General
- https://www.youtube.com/playlist?list=PLn6POgpklwWqfzaosSgX2XEKpse5VY2v5
- https://www.youtube.com/watch?v=s_o8dwzRlu4
- https://blog.stephane-robert.info/docs/conteneurs/orchestrateurs/kubernetes/introduction/
- https://cours.brosseau.ovh/tp/ci/kubernetes/deploy-container-in-kubernetes.html
- https://blog.stephane-robert.info/docs/conteneurs/orchestrateurs/k3s/introduction/

### Vagrant
- https://blog.stephane-robert.info/docs/infra-as-code/provisionnement/vagrant/introduction/
- https://blog.filador.fr/a-la-decouverte-de-k3s/
- https://lpenaud.github.io/vagrant.html

### Ingress
- https://kubernetes.io/docs/concepts/services-networking/ingress/
- https://www.youtube.com/watch?v=4tDu39Ks0g0&list=PLn6POgpklwWqfzaosSgX2XEKpse5VY2v5&index=42
- https://www.suse.com/c/rancher_blog/deploy-an-ingress-controller-on-k3s/

### ArgoCD and k3d
- https://www.youtube.com/watch?v=JLrR9RV9AFA
- https://www.sokube.io/blog/gitops-on-a-laptop-with-k3d-and-argocd