[![Continuous Integration][github-workflow-img]][github-workflow] [![Go Report Card][goreport-img]][goreport] [![GoDoc][godoc-img]][godoc]

# OpenTelemetry Operator for Kubernetes

The OpenTelemetry Operator is an implementation of a [Kubernetes Operator](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/).

The operator manages:

- [OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector)
- [auto-instrumentation](https://opentelemetry.io/docs/concepts/instrumentation/automatic/) of the workloads using OpenTelemetry instrumentation libraries

## Documentation

- [Compatibility & Support docs](./docs/compatibility.md)
- [API docs](./docs/api.md)
- [Official OpenTelemetry Operator page](https://opentelemetry.io/docs/kubernetes/operator/)

## Helm Charts

You can install the OpenTelemetry Operator via the [Helm chart](https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-operator) from the opentelemetry-helm-charts repository, where more detailed installation instructions are available.
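
For example, using the chart's documented repository (the release name is your choice):

```bash
# Add the official opentelemetry-helm-charts repository and install the operator chart
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
helm install opentelemetry-operator open-telemetry/opentelemetry-operator
```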

## Getting started

To install the operator in an existing cluster, make sure you have [`cert-manager` installed](https://cert-manager.io/docs/installation/) and run:

```bash
kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
```

Once the `opentelemetry-operator` deployment is ready, create an OpenTelemetry Collector (otelcol) instance, like:

```yaml
kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: simplest
spec:
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
    exporters:
      debug:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [debug]
EOF
```

> 🚨 **NOTE:** At this point, the Operator does _not_ validate the contents of the configuration file: if the configuration is invalid, the instance will still be created but the underlying OpenTelemetry Collector might crash.

> 🚨 **Note:** For private GKE clusters, you will need to either add a firewall rule that allows master nodes access to port `9443/tcp` on worker nodes, or change the existing rule that allows access to ports `80/tcp`, `443/tcp`, and `10254/tcp` to also allow access to port `9443/tcp`. More information can be found in the [Official GCP Documentation](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#config-hc-firewall). See the [GKE documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) on adding rules and the [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/79739) for more detail.

The Operator does examine the configuration file to discover configured receivers and their ports. If it finds receivers with ports, it creates a pair of Kubernetes services, one of them headless, exposing those ports within the cluster. The headless service carries a `service.beta.openshift.io/serving-cert-secret-name` annotation that causes OpenShift to create a secret containing a certificate and key. This secret can be mounted as a volume, and the certificate and key can be used in those receivers' TLS configurations.
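
On OpenShift, wiring that secret into a receiver might look like the following sketch. The secret name below is hypothetical; use the name the operator sets via the serving-cert annotation on your headless service:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: with-serving-cert
spec:
  volumes:
    - name: serving-cert
      secret:
        secretName: with-serving-cert-headless-tls # hypothetical secret name
  volumeMounts:
    - name: serving-cert
      mountPath: /certs
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
            tls:
              # OpenShift serving-cert secrets store the pair as tls.crt / tls.key
              cert_file: /certs/tls.crt
              key_file: /certs/tls.key
    exporters:
      debug:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]
```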

### Upgrades

The OpenTelemetry Collector configuration format is continuing to evolve. However, a best-effort attempt is made to upgrade all managed `OpenTelemetryCollector` resources.

In certain scenarios, it may be desirable to prevent the operator from upgrading certain `OpenTelemetryCollector` resources. For example, when a resource is configured with a custom `.Spec.Image`, end users may wish to manage configuration themselves as opposed to having the operator upgrade it. This can be configured on a resource-by-resource basis with the exposed property `.Spec.UpgradeStrategy`.

By configuring a resource's `.Spec.UpgradeStrategy` to `none`, the operator will skip the given instance during the upgrade routine.

The default and only other acceptable value for `.Spec.UpgradeStrategy` is `automatic`.
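
For example (the image reference is a hypothetical placeholder), a resource pinned to a custom image and excluded from operator upgrades might look like:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: pinned-collector
spec:
  image: registry.example.com/custom-otelcol:0.90.0 # hypothetical custom image
  upgradeStrategy: none # the operator skips this instance during upgrades
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
    exporters:
      debug:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]
```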

### Deployment modes

The `CustomResource` for the `OpenTelemetryCollector` exposes a property named `.Spec.Mode`, which can be used to specify whether the Collector should run as a [`DaemonSet`](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/), [`Sidecar`](https://kubernetes.io/docs/concepts/workloads/pods/#workload-resources-for-managing-pods), [`StatefulSet`](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/) or [`Deployment`](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) (default).
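
A minimal sketch of a per-node Collector using `daemonset` mode (the config shown is illustrative):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: per-node-collector
spec:
  mode: daemonset # one of: deployment (default), daemonset, statefulset, sidecar
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
    exporters:
      debug:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]
```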

See below for examples of each deployment mode:

- [`Deployment`](https://github.com/open-telemetry/opentelemetry-operator/blob/main/tests/e2e/ingress/00-install.yaml)
- [`DaemonSet`](https://github.com/open-telemetry/opentelemetry-operator/blob/main/tests/e2e/daemonset-features/01-install.yaml)
- [`StatefulSet`](https://github.com/open-telemetry/opentelemetry-operator/blob/main/tests/e2e/smoke-statefulset/00-install.yaml)
- [`Sidecar`](https://github.com/open-telemetry/opentelemetry-operator/blob/main/tests/e2e/smoke-sidecar/00-install.yaml)

#### Sidecar injection

A sidecar with the OpenTelemetry Collector can be injected into pod-based workloads by setting the pod annotation `sidecar.opentelemetry.io/inject` to either `"true"`, or to the name of a concrete `OpenTelemetryCollector`, as in the following example:

```yaml
kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: sidecar-for-my-app
spec:
  mode: sidecar
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    exporters:
      debug:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [debug]
EOF

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    sidecar.opentelemetry.io/inject: "sidecar-for-my-app"
spec:
  containers:
    - name: myapp
      image: myImage1
EOF
```

If the images in use must be pulled from a private registry:

- Create an imagePullSecret:

```bash
kubectl create secret docker-registry <secret-name> --docker-server=<registry-server> \
  --docker-username=DUMMY_USERNAME --docker-password=DUMMY_DOCKER_PASSWORD \
  --docker-email=DUMMY_DOCKER_EMAIL
```

- Add the image pull secret to the service account:

```bash
kubectl patch serviceaccount <service-account-name> -p '{"imagePullSecrets": [{"name": "<secret-name>"}]}'
```
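
Alternatively, instead of patching the service account, the secret can be referenced directly in the workload's pod spec; this is standard Kubernetes behavior rather than anything operator-specific (the names below are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  imagePullSecrets:
    - name: my-registry-secret # the secret created above (placeholder name)
  containers:
    - name: myapp
      image: registry.example.com/myapp:latest # placeholder private image
```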

### OpenTelemetry auto-instrumentation injection

The operator can inject and configure OpenTelemetry auto-instrumentation libraries. Currently Apache HTTPD, DotNet, Go, Java, Nginx, NodeJS and Python are supported.

To use auto-instrumentation, configure an `Instrumentation` resource with the configuration for the SDK and instrumentation.

```yaml
kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  exporter:
    endpoint: http://otel-collector:4317
  propagators:
    - tracecontext
    - baggage
    - b3
  sampler:
    type: parentbased_traceidratio
    argument: "0.25"
EOF
```

> **Note:** For `DotNet` auto-instrumentation, by default, the operator sets the `OTEL_DOTNET_AUTO_TRACES_ENABLED_INSTRUMENTATIONS` environment variable, which specifies the list of traces source instrumentations to enable. The default value set by the operator is the set of all instrumentations supported by the `opentelemetry-dotnet-instrumentation` release consumed in the image, i.e. `AspNet,HttpClient,SqlClient`. This value can be overridden by configuring the environment variable explicitly.
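
Once the `Instrumentation` resource exists, injection is enabled per workload with language-specific annotations such as `instrumentation.opentelemetry.io/inject-java` (used throughout the examples below). A minimal sketch for a Java workload, with a placeholder image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    instrumentation.opentelemetry.io/inject-java: "true"
spec:
  containers:
    - name: myapp
      image: myImage1 # placeholder application image
```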

#### Multi-container pods with single instrumentation

If nothing else is specified, instrumentation is performed on the first container available in the pod spec.
In some cases (for example, when an Istio sidecar is injected) it becomes necessary to specify which container(s) the injection must target.

For this, use the `instrumentation.opentelemetry.io/container-names` annotation, indicating one or more container names (`.spec.containers.name`) on which the injection must be performed:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment-with-multiple-containers
spec:
  selector:
    matchLabels:
      app: my-pod-with-multiple-containers
  replicas: 1
  template:
    metadata:
      labels:
        app: my-pod-with-multiple-containers
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/container-names: "myapp,myapp2"
    spec:
      containers:
      - name: myapp
        image: myImage1
      - name: myapp2
        image: myImage2
      - name: myapp3
        image: myImage3
```

In the above case, the `myapp` and `myapp2` containers will be instrumented, while `myapp3` will not.

> 🚨 **NOTE**: Go auto-instrumentation **does not** support multicontainer pods. When injecting Go auto-instrumentation, the first container should be the only container you want instrumented.
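
Go auto-instrumentation also needs to know which binary inside the container to instrument; this is supplied via the `instrumentation.opentelemetry.io/otel-go-auto-target-exe` annotation (which populates the `OTEL_GO_AUTO_TARGET_EXE` environment variable), for example:

```yaml
annotations:
  instrumentation.opentelemetry.io/inject-go: "true"
  instrumentation.opentelemetry.io/otel-go-auto-target-exe: "/path/to/container/executable"
```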

#### Multi-container pods with multiple instrumentations

This feature works only when the `enable-multi-instrumentation` flag is set to `true`.

Annotations defining which language instrumentation should be injected are required. When the feature is enabled, language-specific container-name annotations are used:

Java:

```bash
instrumentation.opentelemetry.io/java-container-names: "java1,java2"
```

NodeJS:

```bash
instrumentation.opentelemetry.io/nodejs-container-names: "nodejs1,nodejs2"
```

Python:

```bash
instrumentation.opentelemetry.io/python-container-names: "python1,python3"
```

DotNet:

```bash
instrumentation.opentelemetry.io/dotnet-container-names: "dotnet1,dotnet2"
```

Go:

```bash
instrumentation.opentelemetry.io/go-container-names: "go1"
```

ApacheHttpD:

```bash
instrumentation.opentelemetry.io/apache-httpd-container-names: "apache1,apache2"
```

NGINX:

```bash
instrumentation.opentelemetry.io/inject-nginx-container-names: "nginx1,nginx2"
```

SDK:

```bash
instrumentation.opentelemetry.io/sdk-container-names: "app1,app2"
```

If language-specific container name annotations are not specified, instrumentation is performed on the first container available in the pod spec (only if single-instrumentation injection is configured).

In some cases the containers in a pod use different technologies, so it becomes necessary to specify which language instrumentation applies to which container(s).

For this, use the language-specific container-name annotations, indicating one or more container names (`.spec.containers.name`) on which the injection must be performed:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment-with-multi-containers-multi-instrumentations
spec:
  selector:
    matchLabels:
      app: my-pod-with-multi-containers-multi-instrumentations
  replicas: 1
  template:
    metadata:
      labels:
        app: my-pod-with-multi-containers-multi-instrumentations
      annotations:
        instrumentation.opentelemetry.io/inject-java: "true"
        instrumentation.opentelemetry.io/java-container-names: "myapp,myapp2"
        instrumentation.opentelemetry.io/inject-python: "true"
        instrumentation.opentelemetry.io/python-container-names: "myapp3"
    spec:
      containers:
      - name: myapp
        image: myImage1
      - name: myapp2
        image: myImage2
      - name: myapp3
        image: myImage3
```

In the above case, `myapp` and `myapp2` containers will be instrumented using Java and `myapp3` using Python instrumentation.

**NOTE**: Go auto-instrumentation **does not** support multicontainer pods. When injecting Go auto-instrumentation, the first container should be the only one you want to instrument.

**NOTE**: This type of instrumentation **does not** allow instrumenting a container with multiple language instrumentations.

**NOTE**: The `instrumentation.opentelemetry.io/container-names` annotation is not used for this feature.

#### Use customized or vendor instrumentation

By default, the operator uses upstream auto-instrumentation libraries. Custom auto-instrumentation can be configured by
overriding the `image` fields in a CR.

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  java:
    image: your-customized-auto-instrumentation-image:java
  nodejs:
    image: your-customized-auto-instrumentation-image:nodejs
  python:
    image: your-customized-auto-instrumentation-image:python
  dotnet:
    image: your-customized-auto-instrumentation-image:dotnet
  go:
    image: your-customized-auto-instrumentation-image:go
  apacheHttpd:
    image: your-customized-auto-instrumentation-image:apache-httpd
  nginx:
    image: your-customized-auto-instrumentation-image:nginx
```

The Dockerfiles for auto-instrumentation can be found in the [autoinstrumentation directory](./autoinstrumentation).
Follow the instructions in the Dockerfiles on how to build a custom container image.

#### Using Apache HTTPD autoinstrumentation

For `Apache HTTPD` autoinstrumentation, the instrumentation by default assumes httpd version 2.4 and the httpd configuration directory `/usr/local/apache2/conf`, as in the official `Apache HTTPD` image (e.g. `docker.io/httpd:latest`). If you need to use version 2.2, your httpd configuration directory is different, or you need to adjust agent attributes, customize the instrumentation spec as in the following example:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  apacheHttpd:
    image: your-customized-auto-instrumentation-image:apache-httpd
    version: 2.2
    configPath: /your-custom-config-path
    attrs:
      - name: ApacheModuleOtelMaxQueueSize
        value: "4096"
      - name: ...
        value: ...
```

The list of all available attributes can be found at [otel-webserver-module](https://github.com/open-telemetry/opentelemetry-cpp-contrib/tree/main/instrumentation/otel-webserver-module).

#### Using Nginx autoinstrumentation

For `Nginx` autoinstrumentation, Nginx versions 1.22.0, 1.23.0, and 1.23.1 are supported at this time. The Nginx configuration file is expected to be `/etc/nginx/nginx.conf` by default; if yours is different, see the following example for how to change it. The instrumentation at this time also expects a `conf.d` directory to be present in the directory where the configuration file resides, with an `include /conf.d/*.conf;` directive in the `http { ... }` section of the Nginx configuration file (as in the default Nginx configuration file). You can also adjust OpenTelemetry SDK attributes. Example:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  nginx:
    image: your-customized-auto-instrumentation-image:nginx # if custom instrumentation image is needed
    configFile: /my/custom-dir/custom-nginx.conf
    attrs:
      - name: NginxModuleOtelMaxQueueSize
        value: "4096"
      - name: ...
        value: ...
```

The list of all available attributes can be found at [otel-webserver-module](https://github.com/open-telemetry/opentelemetry-cpp-contrib/tree/main/instrumentation/otel-webserver-module).

#### Inject OpenTelemetry SDK environment variables only

You can configure the OpenTelemetry SDK for applications which can't currently be autoinstrumented by using `inject-sdk` in place of `inject-python` or `inject-java`, for example. This will inject environment variables like `OTEL_RESOURCE_ATTRIBUTES`, `OTEL_TRACES_SAMPLER`, and `OTEL_EXPORTER_OTLP_ENDPOINT`, that you can configure in the `Instrumentation`, but will not actually provide the SDK.

```bash
instrumentation.opentelemetry.io/inject-sdk: "true"
```
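
As a sketch, applied to a pod whose runtime has no auto-instrumentation support (the app and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-rust-app # hypothetical app with no auto-instrumentation support
  annotations:
    instrumentation.opentelemetry.io/inject-sdk: "true"
spec:
  containers:
    - name: my-rust-app
      image: my-rust-app:latest # placeholder; the app configures its SDK from the injected OTEL_* env vars
```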

#### Controlling Instrumentation Capabilities

The operator allows specifying, via flags, which languages the `Instrumentation` resource may instrument.
If a language is enabled by default, its gate only needs to be supplied when disabling it.
Language support can be disabled by passing the flag with a value of `false`; a sketch of how the gates are passed follows the table below.

| Language | Gate | Default Value |
| ----------- | ------------------------------------- | ------------- |
| Java | `enable-java-instrumentation` | `true` |
| NodeJS | `enable-nodejs-instrumentation` | `true` |
| Python | `enable-python-instrumentation` | `true` |
| DotNet | `enable-dotnet-instrumentation` | `true` |
| ApacheHttpD | `enable-apache-httpd-instrumentation` | `true` |
| Go | `enable-go-instrumentation` | `false` |
| Nginx | `enable-nginx-instrumentation` | `false` |

The OpenTelemetry Operator can instrument multiple containers with multiple language-specific instrumentations. This feature is enabled with the `enable-multi-instrumentation` flag, which defaults to `false`.

For more information about the multi-instrumentation feature's capabilities, please see [Multi-container pods with multiple instrumentations](#Multi-container-pods-with-multiple-instrumentations).

### Target Allocator

The OpenTelemetry Operator comes with an optional component, the [Target Allocator](/cmd/otel-allocator/README.md) (TA). When creating an OpenTelemetryCollector Custom Resource (CR) with the TA enabled, the Operator creates a new deployment and service to serve specific `http_sd_config` directives for each Collector pod that is part of that CR. It also rewrites the Prometheus receiver configuration in the CR so that it uses the deployed target allocator. The following example shows how to get started with the Target Allocator:

```yaml
kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: collector-with-ta
spec:
  mode: statefulset
  targetAllocator:
    enabled: true
  config: |
    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: 'otel-collector'
              scrape_interval: 10s
              static_configs:
                - targets: [ '0.0.0.0:8888' ]
    exporters:
      debug:
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [debug]
EOF
```