Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/noahdietz/scalar
Automate management of Kubernetes HPAs for Deployments & ReplicationControllers
Last synced: 2 months ago
- Host: GitHub
- URL: https://github.com/noahdietz/scalar
- Owner: noahdietz
- License: apache-2.0
- Created: 2016-10-17T04:00:30.000Z (about 8 years ago)
- Default Branch: master
- Last Pushed: 2016-10-20T19:09:54.000Z (about 8 years ago)
- Last Synced: 2024-06-20T16:52:23.703Z (6 months ago)
- Language: Go
- Homepage:
- Size: 4.71 MB
- Stars: 12
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# scalar
The Kubernetes `HorizontalPodAutoscaler` controller you've been missing.
## When do I use scalar?
Simple: if you use `Deployments` or `ReplicationControllers`, have `heapster` configured, and don't want to manually scale pods (or manually create the artifacts to do so automatically), then you use `scalar`.
Kubernetes has a built-in autoscaling feature in the `HorizontalPodAutoscaler`. This native resource monitors a `Deployment's` or `ReplicationController's` CPU usage, and when the average across all related `Pods` reaches a pre-configured threshold, more `Pods` are created to bring it back down. Find more info in the [k8s docs](http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/).

So, rather than manually monitoring the usage of a `Deployment's` `Pods` and running a command like `kubectl scale deployment nginx --replicas=5`, use a `HorizontalPodAutoscaler` to automate it. And even better than manually creating one of these for every deployment a coworker or tenant makes (`kubectl autoscale deployment nginx --min=2 --max=10`), let `scalar` do it for you!
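The scaling decision itself boils down to the ratio described in the Kubernetes HPA docs: scale the current replica count by observed-over-target CPU utilization, rounding up. Here is a tiny illustrative sketch of that arithmetic (not `scalar` code, just the formula):

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas mirrors the calculation described in the Kubernetes HPA
// docs: multiply the current replica count by the ratio of observed CPU
// utilization to the configured target, rounding up.
func desiredReplicas(currentReplicas int32, currentCPUPercent, targetCPUPercent float64) int32 {
	return int32(math.Ceil(float64(currentReplicas) * currentCPUPercent / targetCPUPercent))
}

func main() {
	// 2 replicas averaging 150% CPU against a 75% target -> scale to 4.
	fmt.Println(desiredReplicas(2, 150, 75))
}
```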
## What does scalar actually do?
Again, it's simple. `scalar` doesn't do anything special. It just lives its monotonous life, watching for `Deployment` and `ReplicationController` creation and deletion events in your cluster. When one of these is created, `scalar` springs into action, creating a `HorizontalPodAutoscaler` for the newly created artifact and adding a reference to it in its cache. While `scalar` is humming along, it gives periodic status updates on your cluster's scaling activities by logging the status of each active `HorizontalPodAutoscaler` (even those it didn't create). The frequency of said updates is configurable, and "togglable". Finally, when a `Deployment` or `ReplicationController` is deleted, if `scalar` had created a `HorizontalPodAutoscaler` for it, that autoscaler is deleted as well and removed from the cache.
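For illustration, the reaction to a single `Deployment` event might look roughly like the sketch below. This is not `scalar`'s actual source: the `hpaCache` map, the hard-coded bounds, and the modern `apps/v1`/`autoscaling/v1` API groups are assumptions (the project itself targets the client-go 1.4 era).

```go
package sketch

import (
	"context"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// handleDeploymentEvent sketches the reaction to one watch event: create an
// HPA when a Deployment appears, delete it when the Deployment goes away.
// hpaCache is a placeholder for whatever bookkeeping scalar keeps.
func handleDeploymentEvent(ctx context.Context, client kubernetes.Interface, ev watch.Event, hpaCache map[string]bool) {
	dep, ok := ev.Object.(*appsv1.Deployment)
	if !ok {
		return // not a Deployment; ignore
	}
	hpas := client.AutoscalingV1().HorizontalPodAutoscalers(dep.Namespace)
	key := dep.Namespace + "/" + dep.Name

	switch ev.Type {
	case watch.Added:
		minReplicas, targetCPU := int32(2), int32(75)
		hpa := &autoscalingv1.HorizontalPodAutoscaler{
			ObjectMeta: metav1.ObjectMeta{Name: dep.Name, Namespace: dep.Namespace},
			Spec: autoscalingv1.HorizontalPodAutoscalerSpec{
				ScaleTargetRef: autoscalingv1.CrossVersionObjectReference{
					APIVersion: "apps/v1", Kind: "Deployment", Name: dep.Name,
				},
				MinReplicas:                    &minReplicas,
				MaxReplicas:                    8,
				TargetCPUUtilizationPercentage: &targetCPU,
			},
		}
		if _, err := hpas.Create(ctx, hpa, metav1.CreateOptions{}); err != nil {
			log.Printf("creating HPA for deployment %s: %v", dep.Name, err)
			return
		}
		hpaCache[key] = true

	case watch.Deleted:
		if hpaCache[key] { // only clean up HPAs created by this controller
			if err := hpas.Delete(ctx, dep.Name, metav1.DeleteOptions{}); err != nil {
				log.Printf("deleting HPA for deployment %s: %v", dep.Name, err)
			}
			delete(hpaCache, key)
		}
	}
}
```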
## How does it do the stuff?
Keeping with the trend of simplicity, `scalar's` implementation aims to be just that. Using the Kubernetes `watch` interface ([docs here](https://godoc.org/k8s.io/client-go/1.4/pkg/watch)), `scalar` follows all CRUD events for `Deployments` and `ReplicationControllers`, filtering out the events it doesn't care about and reacting to the ones it does. The implementation uses the `kubernetes/client-go` package to generate an in-cluster configuration used to communicate with the k8s `apiserver` to do all the things and stuff ([docs here](https://github.com/kubernetes/client-go)).
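Put roughly in code, that in-cluster wiring might look like the following with a recent `client-go`. Treat it as an approximation of the approach described above, not the project's actual implementation.

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Build a client from the in-cluster ServiceAccount credentials.
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("loading in-cluster config: %v", err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatalf("building clientset: %v", err)
	}

	ctx := context.Background()

	// Watch Deployment events in all namespaces; an optional label selector
	// (SCALAR_SELECTOR, see the table below) could be passed via ListOptions.
	watcher, err := client.AppsV1().Deployments(metav1.NamespaceAll).Watch(ctx, metav1.ListOptions{})
	if err != nil {
		log.Fatalf("starting watch: %v", err)
	}
	defer watcher.Stop()

	log.Println("Scalar is configured and ready to scale!")

	// Filtering and HPA creation/deletion would live in a handler like the
	// one sketched in the previous section.
	for ev := range watcher.ResultChan() {
		log.Printf("observed %s event for %T", ev.Type, ev.Object)
	}
}
```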
## Are there any cluster dependencies?
In order for `scalar` to be at all effective, you need a few things. First, and most importantly, you need a functioning `heapster` setup in your cluster. If you don't have this, follow the instructions laid out [here](https://github.com/kubernetes/heapster).
Second, `scalar` needs to run in a namespace where the default `ServiceAccount` is authorized to watch resource events and create/delete resources wherever necessary. This is the out-of-the-box configuration; if you and your cluster have a different authz pattern, do your thang!

## Let's deploy this thing!
This part is easy. Copy the manifest from [master](https://github.com/noahdietz/scalar/blob/master/scalar.yaml), configure it as you see fit, and deploy:
```sh
> kubectl create -f scalar.yaml
```
Verify that it worked:
```sh
> kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
scalar-700111452-0vwas   1/1       Running   0          10s
```
Check that `scalar` is happy:
```sh
> kubectl logs scalar-700111452-0vwas
2016/10/17 03:56:22 Scalar is configured and ready to scale!
```
Test that it works:
```sh
> kubectl run nginx --image=nginx --port=80
deployment "nginx" created
> kubectl logs scalar-700111452-0vwas
2016/10/17 03:56:22 Scalar is configured and ready to scale!
2016/10/17 03:57:12 Creating horizontal pod autoscaler for deployment nginx default
```
You're good to go! `scalar` is scaling away for you!
## What do the status updates look like?
It's OK to want to see what you're getting; turn it off if you find it annoying. Here is what a status update might look like:
```sh
2016/10/17 03:57:12 Status for HPA "nginx" in default: | ObservedGeneration: 0 | LastScaleTime: | CurrentReplicas: 2 | DesiredReplicas: 2 | CurrentCPUUtilizationPercentage: 10 |
```
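Under the hood, a periodic report like the one above could be produced by a loop along these lines. This is an illustrative sketch only: the helper name and interval handling are placeholders, with the real frequency controlled by `SCALAR_STATUS_TIMER`.

```go
package sketch

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// logHPAStatuses periodically lists every HorizontalPodAutoscaler in the
// cluster and logs its status, similar to the sample output above.
func logHPAStatuses(ctx context.Context, client kubernetes.Interface, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			hpas, err := client.AutoscalingV1().HorizontalPodAutoscalers(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
			if err != nil {
				log.Printf("listing HPAs: %v", err)
				continue
			}
			for _, hpa := range hpas.Items {
				s := hpa.Status
				cpu := int32(0)
				if s.CurrentCPUUtilizationPercentage != nil {
					cpu = *s.CurrentCPUUtilizationPercentage
				}
				log.Printf("Status for HPA %q in %s: | CurrentReplicas: %d | DesiredReplicas: %d | CurrentCPUUtilizationPercentage: %d |",
					hpa.Name, hpa.Namespace, s.CurrentReplicas, s.DesiredReplicas, cpu)
			}
		}
	}
}
```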
## What knobs can I turn?
Glad you asked! Here is a list of the available configuration values (a small sketch of how they might be read follows the table).
| Env Var | Default | Description |
| ------- | ------- | ----------- |
| `SCALAR_SELECTOR` | `""` | A label selector used to filter the watcher events to specifically labeled objects |
| `SCALAR_PRINT_STATUS` | `"true"` | Flag that toggles printing of autoscaling statuses |
| `SCALAR_STATUS_TIMER` | `"1800"` | Frequency in seconds of the autoscaling status print |
| `SCALAR_MIN_REPLICAS` | `"2"` | Minimum number of active replicas in a Deployment/ReplicationController |
| `SCALAR_MAX_REPLICAS` | `"8"` | Maximum number of active replicas in a Deployment/ReplicationController |
| `SCALAR_TARGET_CPU` | `"75"` | Target CPU utilization percentage for the autoscaling threshold |

---
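As a rough illustration of how these knobs and their documented defaults might be read (`getenvDefault` is a hypothetical helper, not `scalar`'s actual parsing code):

```go
package sketch

import "os"

// getenvDefault returns the value of key from the environment, falling back
// to def when the variable is unset, mirroring the defaults in the table above.
func getenvDefault(key, def string) string {
	if v, ok := os.LookupEnv(key); ok {
		return v
	}
	return def
}

// Example: load the knobs from the table with their documented defaults.
var (
	selector    = getenvDefault("SCALAR_SELECTOR", "")
	printStatus = getenvDefault("SCALAR_PRINT_STATUS", "true")
	statusTimer = getenvDefault("SCALAR_STATUS_TIMER", "1800")
	minReplicas = getenvDefault("SCALAR_MIN_REPLICAS", "2")
	maxReplicas = getenvDefault("SCALAR_MAX_REPLICAS", "8")
	targetCPU   = getenvDefault("SCALAR_TARGET_CPU", "75")
)
```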
If you have any questions or want to contribute, open an issue or PR in the repo! Thank you and have fun with `scalar`!