
## Frisbee - A Test Automation Framework For Kubernetes






Website | Blog | Docs | Contact
## What is Frisbee?

**Frisbee** is the first Kubernetes framework designed to support **declarative end-to-end system testing** of containerized applications.

:heavy_check_mark: Design end-to-end testing scenarios in an intuitive language!

:heavy_check_mark: Run the actual production software - no mocks!

:heavy_check_mark: Replay complex workloads through automated workflows!

:heavy_check_mark: Combine Chaos Engineering with large-scale performance testing!

:heavy_check_mark: Verify program behavior through programmable assertions!

To learn more about Frisbee, check the **[Walkthrough](https://frisbee.dev/docs/walkthrough)** tutorial or visit our **[Website](https://frisbee.dev)**.

## Testing Patterns

In declarative testing, a test scenario focuses on what to accomplish rather than on the imperative details of how to manipulate the state of an application under test and verify the final application state against an expected state.

This approach not only makes tests more *readable, maintainable, and reproducible*, but it also helps DevOps engineers identify testing patterns.

Here, you can see some testing patterns we have identified across different application domains.

👉 [Databases](./examples/patterns/databases)

👉 [Federated Learning](./examples/patterns/federated-learning)

👉 [HPC](./examples/patterns/hpc)

👉 [CI](./examples/patterns/ci)
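As a sketch of this declarative style, a scenario describes *what* to run and in what order, and the controller reconciles the rest. The manifest below is illustrative only: the field names are assumptions modeled loosely on the examples in this repository, so consult [examples](examples) for authoritative definitions.

```yaml
# Illustrative only: field names are assumptions; see examples/ for real scenarios.
apiVersion: frisbee.dev/v1alpha1
kind: Scenario
metadata:
  name: iperf-demo
spec:
  actions:
    # Start an iperf server from a stored template.
    - action: Service
      name: server
      service:
        templateRef: iperf.server
    # Start the client only after the server is running.
    - action: Service
      name: client
      depends: { running: [ server ] }
      service:
        templateRef: iperf.client
```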

## Getting Started

### Step 1 – Prerequisites:

To run Frisbee you must have access to a running Kubernetes cluster and ensure that [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/)
and [Helm](https://helm.sh/docs/intro/install/) are installed on your system.

For quick testing, you can use [microk8s](https://microk8s.io/docs/getting-started) to deploy a local Kubernetes cluster on your machine.

```shell
mkdir -p ~/.kube   # ensure the kubeconfig directory exists
sudo microk8s config > ~/.kube/config
sudo microk8s enable dns
sudo microk8s enable ingress
sudo microk8s enable helm3
sudo snap alias microk8s.kubectl kubectl
sudo snap alias microk8s.helm3 helm
```

### Step 2 – Install Frisbee on K8s cluster:

Firstly, we need to download the Frisbee CLI:

```shell
curl -sSLf https://frisbee.dev/install.sh | sudo bash
export PATH=$PATH:/usr/local/bin
```

Since most Frisbee operations happen through this CLI, it is recommended to enable autocompletion.

```shell
source <(kubectl-frisbee completion bash)
```
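Autocompletion set this way lasts only for the current shell session. To make it persistent, you can append the same line to your shell profile (`~/.bashrc` here is an assumption; adjust for zsh or other shells):

```shell
# Append the completion hook to ~/.bashrc so new sessions pick it up.
echo 'source <(kubectl-frisbee completion bash)' >> ~/.bashrc
```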

> As usual, press `Tab` twice to get the CLI fields auto-completed.

The next step is to install Frisbee in your Kubernetes cluster.

```shell
kubectl-frisbee install production
kubectl logs -l control-plane=frisbee-operator -n frisbee --follow
```

Finally, you can download the Frisbee project to get access to the ready-to-use examples.

```shell
git clone [email protected]:CARV-ICS-FORTH/frisbee.git
cd frisbee
```

### Step 3 – Submit a testing job:

To submit a testing job, the general syntax is:

```shell
kubectl-frisbee submit test <test-name> <scenario.yml> [template-dir ...]
```

Conventionally, we separate the test-cases from the templates of the system under evaluation.

* [examples](examples): contains a list of test-cases.
* [charts](charts): contains Helm charts that provide templates used in the test-cases.

Let's try to run a scenario from the tutorial.

```shell
kubectl-frisbee submit test demo- ./examples/tutorial/15.performance-monitoring.yml ./examples/apps/iperf2/ ./charts/system/
```
* **demo-**: `demo` is the naming prefix, and the trailing `-` indicates an autogenerated suffix (e.g., `demo-326`).
* [./examples/tutorial/15.performance-monitoring.yml](./examples/tutorial/15.performance-monitoring.yml): the scenario executes the iperf benchmark and the monitoring stack for observing its execution.
* [./examples/apps/iperf2](./examples/apps/iperf2): provides the application templates used within the scenario.
* [./charts/system](./charts/system): provides system-wide templates for the telemetry stack, chaos injection, etc.



#### Inspect Submitted Jobs:

To get a list of submitted tests, use:

```shell
kubectl frisbee get tests
```



Note that every test-case runs in a dedicated namespace (named after the test). To dive further into execution details,
use:

```shell
kubectl frisbee inspect tests demo-326
```



### Step 4 – Live Progress Monitoring:

The last section of `inspect` provides the URLs for accessing Prometheus/Grafana.

Note that every scenario has its own monitoring stack in order to avoid interfering metrics.

```shell
firefox $(kubectl frisbee inspect tests demo-326 | grep grafana- | awk '{print $3}')
```
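The command above is plain text processing over the `inspect` output: `grep` keeps the line naming the Grafana endpoint and `awk` prints its third whitespace-separated column. The sketch below demonstrates the same pipeline on a fabricated line (the exact `inspect` output format is an assumption):

```shell
# Fabricated sample line; the real format comes from `kubectl frisbee inspect`.
sample='grafana-demo-326   Ingress   http://grafana.demo-326.example.com'
echo "$sample" | grep grafana- | awk '{print $3}'
# Prints the third field: the Grafana URL.
```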




In contrast to vanilla Grafana, which plots only the performance metrics, Frisbee
provides `Contextualized Visualizations` that contain information about:

* Joining nodes (blue vertical lines)
* Exiting nodes (orange vertical lines)
* Fault-Injection (red ranges)

Information like that helps in `root-cause analysis`, as it makes it easy to correlate
an `observed behavior back to a testing event`.

For example, in the next figure, it is fairly easy to see that `INSERT_ERROR` messages (`yellow line`) are triggered
by a `fault-injection event`.



### Step 5 – Auto-generate test reports:

Finally, Frisbee provides an advanced functionality for auto-generating reports for the tests.

```shell
kubectl-frisbee report test demo-326 ~/frisbee-reports --pdf --force
```

This will create a report in the `~/frisbee-reports` directory, including the PDFs exported from Grafana.

## Features

👉 Workflow templating to store commonly used workflows in the cluster.

👉 DAG based declaration of testing workflows.

👉 Step-level inputs & outputs (template parameterization).

👉 Conditional Execution (Time-Driven, Status-Driven, Performance-Driven).

👉 Live Progress monitoring via Prometheus/Grafana.

👉 Assertions and alerting of SLA violations.

👉 Placement Policies (affinity/tolerations/node selectors).

👉 Archiving of test results for later access.

👉 Garbage collection of completed resources.

👉 Chaos-Engineering and Fault-Injection via Chaos-Mesh.

👉 On-Demand reliable container attached storage.

👉 CLI application for test management and test inspection.

## Citation

If you publish work that uses Frisbee, please cite Frisbee as follows:

```bibtex
@article{nikolaidis2021frisbee,
  title={Frisbee: automated testing of Cloud-native applications in Kubernetes},
  author={Nikolaidis, Fotis and Chazapis, Antony and Marazakis, Manolis and Bilas, Angelos},
  journal={arXiv preprint arXiv:2109.10727},
  year={2021}
}
```

## Contributing to Frisbee

We welcome contributions. Please see [CONTRIBUTING.md](CONTRIBUTING.md) to get started!

## Acknowledgements

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant
agreement No. 894204 (Ether, H2020-MSCA-IF-2019).