# portainer/k8s

How to deploy Portainer inside a Kubernetes environment.
This repo contains Helm charts and YAML manifests for deploying Portainer into a Kubernetes environment. Follow the instructions applicable to your edition and deployment method below:
- [Deploying with Helm](#deploying-with-helm)
- [Community Edition](#community-edition)
- [Using NodePort on a local/remote cluster](#using-nodeport-on-a-localremote-cluster)
- [Using a cloud provider's loadbalancer](#using-a-cloud-providers-loadbalancer)
- [Using ClusterIP with an ingress](#using-clusterip-with-an-ingress)
- [Enterprise Edition](#enterprise-edition)
- [Using NodePort on a local/remote cluster](#using-nodeport-on-a-localremote-cluster-1)
- [Using a cloud provider's loadbalancer](#using-a-cloud-providers-loadbalancer-1)
- [Using ClusterIP with an ingress](#using-clusterip-with-an-ingress-1)
- [Deploying with manifests](#deploying-with-manifests)
- [Community Edition](#community-edition-1)
- [Using NodePort on a local/remote cluster](#using-nodeport-on-a-localremote-cluster-2)
- [Using a cloud provider's loadbalancer](#using-a-cloud-providers-loadbalancer-2)
- [Enterprise Edition](#enterprise-edition-1)
- [Using NodePort on a local/remote cluster](#using-nodeport-on-a-localremote-cluster-3)
- [Using a cloud provider's loadbalancer](#using-a-cloud-providers-loadbalancer-3)
- [Note re persisting data](#note-re-persisting-data)

# Deploying with Helm
Ensure you're using at least helm v3.2, which [includes support](https://github.com/helm/helm/pull/7648) for the `--create-namespace` argument.
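You can confirm the installed client version before proceeding:
```
helm version --short
```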
Install the repository:
```
helm repo add portainer https://portainer.github.io/k8s/
helm repo update
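# optional: confirm the chart is now available in the local repo cache
helm search repo portainer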
```

## Community Edition
Install the helm chart:
### Using NodePort on a local/remote cluster
```
helm install --create-namespace -n portainer portainer portainer/portainer
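# optional: look up the NodePort assigned to the service ("portainer" is the
# service name created by this release name; adjust if yours differs)
kubectl -n portainer get svc portainer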
```

### Using a cloud provider's loadbalancer
```
helm install --create-namespace -n portainer portainer portainer/portainer \
--set service.type=LoadBalancer
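# optional: watch until the cloud provider assigns an external IP
kubectl -n portainer get svc portainer -w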
```

### Using ClusterIP with an ingress
```
helm install --create-namespace -n portainer portainer portainer/portainer \
--set service.type=ClusterIP
```

For advanced helm customization, see the [chart README](/charts/portainer/README.md).
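With the service type set to ClusterIP, Portainer is only reachable through an ingress. A minimal sketch of such an ingress, assuming an NGINX ingress controller, a `portainer.example.com` hostname, and a service named `portainer` listening on HTTP port 9000 (adapt all of these to your cluster):
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: portainer
  namespace: portainer
spec:
  ingressClassName: nginx          # assumption: an NGINX ingress controller is installed
  rules:
    - host: portainer.example.com  # assumption: replace with your hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: portainer    # assumption: the service name created by the release
                port:
                  number: 9000     # assumption: the HTTP port exposed by the service
```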
## Enterprise Edition
### Using NodePort on a local/remote cluster
```
helm install --create-namespace -n portainer portainer portainer/portainer \
--set enterpriseEdition.enabled=true
```

### Using a cloud provider's loadbalancer
```
helm install --create-namespace -n portainer portainer portainer/portainer \
--set enterpriseEdition.enabled=true \
--set service.type=LoadBalancer
```

### Using ClusterIP with an ingress
```
helm install --create-namespace -n portainer portainer portainer/portainer \
--set enterpriseEdition.enabled=true \
--set service.type=ClusterIP
```

For advanced helm customization, see the [chart README](/charts/portainer/README.md).
# Deploying with manifests
If you're not using Helm, you can install Portainer by applying the manifests directly, as follows:
## Community Edition
### Using NodePort on a local/remote cluster
```
kubectl apply -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml
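# optional: wait for the rollout to complete (deployment name assumed from the manifest)
kubectl -n portainer rollout status deployment/portainer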
```

### Using a cloud provider's loadbalancer
```
kubectl apply -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer-lb.yaml
```

## Enterprise Edition
### Using NodePort on a local/remote cluster
```
kubectl apply -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer-ee.yaml
```

### Using a cloud provider's loadbalancer
```
kubectl apply -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer-lb-ee.yaml
```

# Note re persisting data
The charts/manifests will create a persistent volume for storing Portainer data, using the default StorageClass.
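You can check which StorageClass your cluster treats as the default; it is flagged with `(default)` in the output:
```
kubectl get storageclass
```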
In some Kubernetes clusters (e.g. MicroK8s), the default StorageClass simply creates hostPath volumes, which are not explicitly tied to a particular node. In a multi-node cluster, this can create an issue when the pod is terminated and rescheduled on a different node, "leaving" all the persistent data behind and starting the pod with an "empty" volume.
While this behaviour is inherently a limitation of using hostPath volumes, a suitable workaround is to add a nodeSelector to the deployment, which effectively "pins" the Portainer pod to a particular node.
The nodeSelector can be added in the following ways:
1. Edit your own values.yaml and set the value of nodeSelector like this:
```
nodeSelector:
  kubernetes.io/hostname: <node name>
```

2. Explicitly set the target node when deploying/updating the helm chart on the CLI, by including `--set nodeSelector.kubernetes\.io/hostname=<node name>`
3. If you've deployed Portainer via manifests, without Helm, run the following one-liner to "patch" the deployment, forcing the pod to always be scheduled on the node it's currently running on:

```
kubectl patch deployments -n portainer portainer -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "'$(kubectl get pods -n portainer -o jsonpath='{ ..nodeName }')'"}}}}}' || (echo Failed to identify current node of portainer pod; exit 1)
```
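Whichever method you use, you can verify that the nodeSelector took effect; a sketch, assuming the deployment is named `portainer`:
```
kubectl -n portainer get deployment portainer -o jsonpath='{.spec.template.spec.nodeSelector}'
```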