![KIP](KipOpenSource-logo.png "KIP")
# Kip, the Kubernetes Cloud Instance Provider

Kip is a [Virtual Kubelet](https://github.com/virtual-kubelet/virtual-kubelet) provider that allows a Kubernetes cluster to transparently launch pods onto their own cloud instances. The Kip pod runs in a cluster and creates a virtual Kubernetes node. When a pod is scheduled onto the Virtual Kubelet, Kip starts a right-sized cloud instance for the pod’s workload and dispatches the pod onto the instance. When the pod finishes running, the cloud instance is terminated. We call these cloud instances “cells”.
When workloads run on Kip, your cluster size naturally scales with the cluster workload, pods are strongly isolated from each other, and the user is freed from managing worker nodes and strategically packing pods onto nodes. The result is lower cloud costs, improved security and reduced operational overhead.
#### Table of Contents
* [Installation](#installation)
  + [Option 1: Create a Minimal Cluster](#installation-option-1-create-a-minimal-k8s-cluster)
  + [Option 2: Using an Existing Cluster](#installation-option-2-using-an-existing-cluster)
* [Running Pods on Kip](#running-pods-on-kip)
* [Uninstall](#uninstall)
* [Current Status](#current-status)
* [FAQ](#faq)
* [How it Works](#how-it-works)

## Requirements
To build Kip you need:
- golang 1.14+ (older versions may work)
- deepcopy-gen: `go install k8s.io/code-generator/cmd/deepcopy-gen`

## Installation
There are two ways to get Kip up and running.
1. Use the provided Terraform scripts to create a new Kubernetes cluster
with a single Kip node. There are instructions for
[AWS](deploy/terraform-aws/README.md) and
[GCP](deploy/terraform-gcp/README.md).
2. Add Kip to an existing Kubernetes cluster. This option is documented below.

### Install Kip using an existing cluster
To deploy Kip into an existing cluster, you'll need to set up cloud credentials that allow the Kip provider to manipulate cloud instances, networking and other cloud resources.
**Step 1: Credentials**
In AWS, Kip can either use API keys supplied in the Kip provider configuration file (`provider.yaml`) or use the instance profile of the machine the Kip pod is running on.
On Google Cloud, Kip can use the OAuth scopes attached to the Kubernetes node it runs on. Alternatively, the user can supply a service account key in `provider.yaml`.
**AWS Credentials Option 1 - Configuring AWS API keys:**
You can configure the AWS access key Kip will use in your provider configuration by changing `accessKeyID` and `secretAccessKey` under the `cloud.aws` section. See below for how to create a kustomize overlay with your custom provider configuration.
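For illustration, here is a minimal sketch of that section of `provider.yaml`; the `region` key and all values are placeholders assumed for this example, not taken from this README:

```yaml
cloud:
  aws:
    region: us-east-1          # assumed for illustration
    accessKeyID: "AKIA..."     # placeholder access key ID
    secretAccessKey: "..."     # placeholder secret; never commit real keys
```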
**AWS Credentials Option 2 - Instance Profile Credentials:**
In AWS, Kip can use credentials supplied by the [instance profile](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) attached to the node the pod is dispatched to. To use an instance profile, create an IAM policy with the [minimum Kip permissions](docs/kip-iam-permissions.md) then apply the instance profile to the node that will run the Kip provider pod. The Kip pod must run on the cloud instance that the instance profile is attached to.
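As a sketch of that setup (the instance ID and profile name below are hypothetical, and the profile must already exist with the IAM policy attached), a profile can be attached to a running node with the AWS CLI:

```sh
# Attach a pre-created instance profile to the node that will run the Kip pod.
aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=kip-provider-profile
```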
**GCP Credentials Option 1 - instance service account:**
In GCE, Kip can use the service account attached to an instance. Kip requires the `https://www.googleapis.com/auth/compute` scope in order to launch instances.
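For example, on GKE the scope can be granted when creating the node pool that will host the Kip pod. This is a sketch with hypothetical pool and cluster names, not a command from this README:

```sh
gcloud container node-pools create kip-pool \
    --cluster my-cluster \
    --scopes https://www.googleapis.com/auth/compute
```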
**GCP Credentials - Service Account private key:**
Alternatively, Kip can use service account credentials manually supplied in `provider.yaml`. Add your email and key to `cloud.gce.credentials`. Example:
```yaml
cloud:
  gce:
    projectID: "my-project"
    credentials:
      clientEmail: my-service-account@my-project.iam.gserviceaccount.com
      privateKey: "-----BEGIN PRIVATE KEY-----\n[base64-encoded private key]-----END PRIVATE KEY-----\n"
    zone: us-central1-c
    vpcName: "default"
    subnetName: "default"
```

**Step 2: Apply the manifests**
The resources in [deploy/manifests/kip](deploy/manifests/kip) create ServiceAccounts, Roles and a StatefulSet to run the provider. [Kip is not stateless](docs/state.md), so the manifests also create a PersistentVolumeClaim to store the provider's data.
Once credentials are set up, apply [deploy/manifests/kip/base](deploy/manifests/kip/base) to create the necessary kubernetes resources to support and run the provider.
In AWS:

```sh
$ kustomize build deploy/manifests/kip/base | kubectl apply -f -
```

In GCE:

```sh
$ kustomize build deploy/manifests/kip/overlays/gcp | kubectl apply -f -
```
The manifests are rendered with [kustomize](https://kustomize.io/). You can create your own overlays on top of the base template; for example, to override `provider.yaml`, Kip's configuration file:
```sh
$ mkdir -p deploy/manifests/kip/overlays/local-config
$ cp deploy/manifests/kip/base/provider.yaml deploy/manifests/kip/overlays/local-config/provider.yaml
# Edit your provider configuration file.
$ vi deploy/manifests/kip/overlays/local-config/provider.yaml
$ cat > deploy/manifests/kip/overlays/local-config/kustomization.yaml <<EOF
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
configMapGenerator:
- behavior: merge
  files:
  - provider.yaml
  name: config
EOF
$ kustomize build deploy/manifests/kip/overlays/local-config | kubectl apply -f -
```

After applying, you should see a new kip pod in the kube-system namespace and a new node named "kip-provider-0" in the cluster.
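You can verify this with standard kubectl commands (the grep filter is just illustrative):

```sh
# The virtual node should be registered with the cluster.
$ kubectl get node kip-provider-0

# The provider pod should be running in kube-system.
$ kubectl get pods -n kube-system | grep kip
```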
## Running Pods on Kip
To assign pods to run on the virtual kubelet node, add the following node selector to the pod spec in manifests.
```yaml
spec:
  nodeSelector:
    type: virtual-kubelet
```

If you enabled taints on your virtual node (they are disabled by default in the example manifests; remove `--disable-taint` from the command line flags to enable them), add the necessary tolerations too:
```yaml
spec:
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
```

## Uninstall
If you used the provided Terraform config for creating your cluster, you can remove the VPC and the cluster via:

```sh
terraform destroy -var-file .
```
If you deployed Kip in an existing cluster, make sure that you first remove all the pods and deployments that have been created by Kip. Then remove the kip StatefulSet:

```sh
kubectl delete -n kube-system statefulset kip
```
## Current Status
### Features
- [Networking](docs/networking.md), including host network mode, cluster IPs, DNS, HostPorts and NodePorts
- Pods are started on a cloud instance that matches the pod's resource requests/limits. If no requests/limits are present in the pod spec, Kip falls back to a default cloud instance type specified in [provider-config.yaml](docs/provider-config.md); see the sketch after this list
- GPU instances
- Logs
- Exec
- Stats
- Readiness/Liveness probes
- Service account token automounts in pods.
- [Security Groups](docs/security-groups.md)
- Attaching instance profiles to Cells via [annotations](docs/annotations.md)
- The following volume types are supported
- EmptyDir
- ConfigMap
- Secret
- HostPath
  - Projected ConfigMaps and Secrets
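To make the instance-sizing behavior above concrete, here is a minimal pod sketch; the pod name, image and resource values are arbitrary examples, and the node selector is the one shown earlier:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sized-pod              # hypothetical name
spec:
  nodeSelector:
    type: virtual-kubelet      # schedule onto the Kip virtual node
  containers:
  - name: app
    image: nginx:1.25          # arbitrary example image
    resources:
      requests:
        cpu: "2"               # Kip launches a cell matching these requests
        memory: 4Gi
```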
### Limitations

- Stateful workloads and PersistentVolumes are not supported.
- No support for updating ConfigMaps and Secrets for running Pods and Cells.
- Virtual Kubelet has limitations on what it supports in the Downward API; e.g. `pod.status.podIP` is not supported
- VolumeMounts do not support the `readOnly`, `subPath` and `subPathExpr` attributes.
- VolumeMount `mountPropagation` is always Bidirectional
- Unsupported pod attributes:
- EphemeralContainers
- ReadinessGates
- Lifecycle handlers
- TerminationGracePeriodSeconds
- ActiveDeadlineSeconds
- VolumeDevices
- TerminationMessagePolicy FallbackToLogsOnError is not implemented
- The following PodSecurityContext fields
- FSGroup
- RunAsNonRoot
- ShareProcessNamespace
- HostIPC
  - HostPID

We are actively working on adding missing features. One of the main objectives of the project is to provide full support for all Kubernetes features.
## FAQ
**Q.** I’ve seen the name Milpa mentioned in the logs and source code. What is Milpa?
**A.** Kip’s source code was adapted from an earlier project developed at Elotl called Milpa. We will be migrating away from that name in coming releases. Milpa started out as a standalone unikernel (and later container) orchestration system, and it was natural to move a subset of its functionality into an open source Virtual Kubelet provider.
##
**Q.** How long does it take to start a workload?

**A.** In AWS and GCE, instances boot in under a minute; pods are usually dispatched to the instance in about 45 seconds. Depending on the size of the container image, a pod will be running in 60 to 90 seconds. In our experience, starting pods in Azure can be a bit slower, with startup times between 1.5 and 3 minutes.
##
**Q.** Does it work with the Horizontal Pod Autoscaler and Vertical Pod Autoscaler?

**A.** Yes it does. However, to use the VPA with the provider, the pod must be dispatched to a new cloud instance.
##
**Q.** Are DaemonSets supported?

**A.** Yes, though they might not work as intended: the pod will start on a separate cloud instance, not on the node itself. It's possible to patch a DaemonSet so it does not get dispatched to the Kip virtual node.
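One way to do that (a sketch, not from this README, reusing the `type: virtual-kubelet` node label shown earlier) is a nodeAffinity rule in the DaemonSet's pod template that excludes the virtual node:

```yaml
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: type
                operator: NotIn
                values:
                - virtual-kubelet
```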
##
**Q.** Are you a [Kubernetes conformant](https://github.com/cncf/k8s-conformance) runtime?

**A.** We are not 100% conformant at this time, but we are working towards getting as close as possible to conformance. Currently Kip passes 70-80% of conformance tests, and we are hoping to push that above 90% soon.
##
**Q.** What cloud providers does Kip support?

**A.** Kip is currently GA on AWS and GCE. We are actively working on Azure support.
##
**Q.** What components make up the Kip system?

**A.** The following repositories are part of the Kip system:
* [Itzo](https://github.com/elotl/itzo) contains the cell agent and code for building cell images
* [Tosi](https://github.com/elotl/tosi) for downloading images to cells
* [Cloud-Init](https://github.com/elotl/cloud-init) a minimal cloud-init implementation

##
**Q.** We have a custom-built image. Can I use it for running cells?

**A.** Yes, take a look at [Bring your Own Image](docs/cells.md#bring-your-own-image).
##
**Q.** Can Kip use a kubeconfig file for API server access?

**A.** Yes, see [kubeconfig](docs/kubeconfig.md) for more information.
## How it Works
* [Cells](docs/cells.md)
* [Networking](docs/networking.md)
* [Annotations](docs/annotations.md)
* [Security Groups](docs/security-groups.md)
* [Provider Configuration](docs/provider-config.md)
* [IAM Permissions](docs/kip-iam-permissions.md)
* [State](docs/state.md)