{"id":18637325,"url":"https://github.com/openshift/cluster-node-tuning-operator","last_synced_at":"2025-04-05T02:12:13.094Z","repository":{"id":37271421,"uuid":"150154353","full_name":"openshift/cluster-node-tuning-operator","owner":"openshift","description":"Manage node-level tuning by orchestrating the tuned daemon.","archived":false,"fork":false,"pushed_at":"2024-04-14T06:43:11.000Z","size":46446,"stargazers_count":94,"open_issues_count":35,"forks_count":102,"subscribers_count":18,"default_branch":"master","last_synced_at":"2024-04-14T06:53:12.871Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/openshift.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2018-09-24T19:05:32.000Z","updated_at":"2024-04-15T14:11:01.982Z","dependencies_parsed_at":"2023-10-02T19:16:29.388Z","dependency_job_id":"47bbc9f2-2c5a-4013-b8cb-2e871c3208ca","html_url":"https://github.com/openshift/cluster-node-tuning-operator","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openshift%2Fcluster-node-tuning-operator","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openshift%2Fcluster-node-tuning-operator/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openshift%2Fcluster-node-tuning-operator/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openshift%2Fcluster-node-tuning-operator/manifests","
owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/openshift","download_url":"https://codeload.github.com/openshift/cluster-node-tuning-operator/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247276189,"owners_count":20912288,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-07T05:35:21.651Z","updated_at":"2025-04-05T02:12:12.907Z","avatar_url":"https://github.com/openshift.png","language":"Go","readme":"# Node Tuning Operator\n\nThe Node Tuning Operator (NTO) manages cluster node-level tuning for\n[OpenShift](https://openshift.io/).\n\nThe majority of high-performance applications require some\nlevel of kernel tuning. The Operator provides a unified\nmanagement interface to users of node-level sysctls and more\nflexibility to add [custom tuning](#custom-tuning-specification)\nspecified by user needs. The Operator manages the containerized\n[TuneD](https://github.com/redhat-performance/tuned/)\ndaemon for [OpenShift](https://openshift.io/) as\na Kubernetes DaemonSet. It ensures [custom tuning\nspecification](#custom-tuning-specification) is passed to all\ncontainerized TuneD daemons running in the cluster in the format\nthat the daemons understand. The daemons run on all nodes in the\ncluster, one per node.\n\nWhen a profile is changed, the containerized TuneD daemon will roll back\nany changes to node-level settings before applying the new profile. 
The\ncontainerized TuneD daemon handles termination signals by rolling back any\nnode-level settings it has applied before gracefully shutting down.\n\n## Performance Profile Controller\n[Performance Profile Controller](docs/performanceprofile/performance_controller.md), \npreviously known as Performance Addon Operator and now a part of the Node Tuning Operator, \noptimizes OpenShift clusters for applications sensitive to CPU and network latency.\n\n## Deploying the Node Tuning Operator\n\nThe Operator is deployed by applying the `*.yaml` manifests in the Operator's\n`/manifests` directory in alphanumeric order. It automatically creates a default deployment\nand custom resource\n([CR](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/))\nfor the TuneD daemons. The following shows the default CR created\nby the Operator on a cluster with the Operator installed.\n\n```\n$ oc get Tuned -n openshift-cluster-node-tuning-operator\nNAME      AGE\ndefault   1h\n```\n\nThe default CR is meant for delivering standard node-level tuning for\nthe OpenShift platform, and it can only be modified to set the Operator\nManagement state. Any other custom changes to the default CR will be\noverwritten by the Operator. For custom tuning, create your own Tuned CRs.\nNewly created CRs will be combined with the default CR and custom tuning\napplied to OpenShift nodes based on node or pod labels and profile priorities.\n\nWhile in certain situations the support for pod labels can be a convenient\nway of automatically delivering required tuning, this practice is strongly\ndiscouraged, especially in large-scale clusters. The default\nTuned CR ships without pod label matching. 
If a custom profile is created\nwith pod label matching, the functionality will be enabled at that time.\n\n\n## Custom tuning specification\n\nFor an example of a tuning specification, refer to\n`/assets/tuned/manifests/default-cr-tuned.yaml` in the Operator's directory or to\nthe resource created in a live cluster by:\n\n```\n$ oc get Tuned/default -o yaml\n```\n\nThe CR for the Operator has two major sections. The first\nsection, `profile:`, is a list of TuneD profiles and their names. The\nsecond, `recommend:`, defines the profile selection logic.\n\nMultiple custom tuning specifications can co-exist as multiple CRs\nin the Operator's namespace. The existence of new CRs or the deletion\nof old CRs is detected by the Operator. All existing custom tuning\nspecifications are merged and appropriate objects for the containerized\nTuneD daemons are updated.\n\n### Management state\n\nThe Operator Management state is set by adjusting the default Tuned CR.\nBy default, the Operator is in the Managed state and the `spec.managementState`\nfield is not present in the default Tuned CR. Valid values for the Operator\nManagement state are as follows:\n\n  * Managed: the Operator will update its operands as configuration resources are updated\n  * Unmanaged: the Operator will ignore changes to the configuration resources\n  * Removed: the Operator will remove its operands and resources the Operator provisioned\n\n\n### Profile data\n\nThe `profile:` section lists TuneD profiles and their names.\n\n```\n  profile:\n  - name: tuned_profile_1\n    data: |\n      # TuneD profile specification\n      [main]\n      summary=Description of tuned_profile_1 profile\n\n      [sysctl]\n      net.ipv4.ip_forward=1\n      # ... 
other sysctls or other TuneD daemon plug-ins supported by the containerized TuneD\n\n  # ...\n\n  - name: tuned_profile_n\n    data: |\n      # TuneD profile specification\n      [main]\n      summary=Description of tuned_profile_n profile\n\n      # tuned_profile_n profile settings\n```\n\nRefer to a list of\n[TuneD plug-ins supported by the Operator](#supported-tuned-daemon-plug-ins).\n\n\n### Recommended profiles\n\nThe `profile:` selection logic is defined by the `recommend:` section of the CR.\nThe `recommend:` section is a list of items that recommend profiles based on\nselection criteria.\n\n```\n  recommend:\n  \u003crecommend-item-1\u003e\n  # ...\n  \u003crecommend-item-n\u003e\n```\n\nThe individual items of the list:\n\n```\n  - machineConfigLabels:                # optional\n      \u003cmcLabels\u003e                        # a dictionary of key/value MachineConfig labels; the keys must be unique\n    match:                              # optional; if omitted, profile match is assumed unless a profile with a higher priority matches first or 'machineConfigLabels' is set\n    \u003cmatch\u003e                             # an optional list\n    priority: \u003cpriority\u003e                # profile ordering priority, lower numbers mean higher priority (0 is the highest priority)\n    profile: \u003ctuned_profile_name\u003e       # a TuneD profile to apply on a match; for example tuned_profile_1\n    operand:\t\t\t\t# optional operand configuration\n      debug: \u003cbool\u003e\t\t\t# turn debugging on/off for the TuneD daemon: true/false (default is false)\n      tunedConfig:\t\t\t# global configuration for the TuneD daemon as defined in tuned-main.conf\n        reapply_sysctl: \u003cbool\u003e\t\t# turn reapply_sysctl functionality on/off for the TuneD daemon: true/false\n```\n\nIf `\u003cmatch\u003e` is omitted, a profile match (i.e. 
_true_) is assumed.\n\n`\u003cmatch\u003e` is an optional list recursively defined as follows:\n\n```\n    - label: \u003clabel_name\u003e     # node or pod label name\n      value: \u003clabel_value\u003e    # optional node or pod label value; if omitted, the presence of \u003clabel_name\u003e is enough to match\n      type: \u003clabel_type\u003e      # optional node or pod type (\"node\" or \"pod\"); if omitted, \"node\" is assumed\n      \u003cmatch\u003e                 # an optional \u003cmatch\u003e list\n```\n\nIf `\u003cmatch\u003e` is not omitted, all nested `\u003cmatch\u003e` sections must\nalso evaluate to _true_. Otherwise, _false_ is assumed and the\nprofile with the respective `\u003cmatch\u003e` section will not be applied or\nrecommended. Therefore, the nesting (child `\u003cmatch\u003e` sections) works as a logical\nAND operator. Conversely, if any item of the `\u003cmatch\u003e` list matches,\nthe entire `\u003cmatch\u003e` list evaluates to _true_. Therefore, the list\nacts as a logical OR operator.\n\nIf `machineConfigLabels` is defined, MachineConfigPool-based matching is turned on\nfor the given `recommend:` list item. `\u003cmcLabels\u003e` specifies the labels\nfor a MachineConfig. The MachineConfig is created automatically to apply host settings, such as\nkernel boot parameters, for the profile `\u003ctuned_profile_name\u003e`. This involves\nfinding all MachineConfigPools with machineConfigSelector matching\n`\u003cmcLabels\u003e` and setting the profile `\u003ctuned_profile_name\u003e` on all nodes that\nare assigned to the found MachineConfigPools.\n\nThe list items `match` and `machineConfigLabels` are connected by the logical OR operator.\nThe `match` item is evaluated first in a short-circuit manner. 
Therefore, if it evaluates to\n`true`, the `machineConfigLabels` item is not considered.\n\n\n#### Example\n\n```\n  - match:\n    - label: tuned.openshift.io/elasticsearch\n      match:\n      - label: node-role.kubernetes.io/master\n      - label: node-role.kubernetes.io/infra\n      type: pod\n    priority: 10\n    profile: openshift-control-plane-es\n  - match:\n    - label: node-role.kubernetes.io/master\n    - label: node-role.kubernetes.io/infra\n    priority: 20\n    profile: openshift-control-plane\n  - priority: 30\n    profile: openshift-node\n```\n\nThe CR above is translated for the containerized TuneD daemon into\nits recommend.conf file based on the profile priorities. The profile\nwith the highest priority (10) is openshift-control-plane-es and,\ntherefore, it is considered first. The containerized TuneD daemon\nrunning on a given node looks to see if there is a pod running on the\nsame node with the `tuned.openshift.io/elasticsearch` label set. If not,\nthe entire `\u003cmatch\u003e` section evaluates as _false_. If there is such a\npod with the label, in order for the `\u003cmatch\u003e` section to evaluate to\n_true_, the node label also needs to be `node-role.kubernetes.io/master`\nOR `node-role.kubernetes.io/infra`.\n\nIf the labels for the profile with priority 10 matched, the\nopenshift-control-plane-es profile is applied and no other profile is\nconsidered. If the node/pod label combination did not match,\nthe profile with the second highest priority (openshift-control-plane) is considered.\nThis profile is applied if the containerized TuneD pod runs on a node with\nlabels `node-role.kubernetes.io/master` OR `node-role.kubernetes.io/infra`.\n\nFinally, the profile `openshift-node` has the lowest priority of 30.\nIt lacks the `\u003cmatch\u003e` section and, therefore, will always match. 
It\nacts as a catch-all profile that sets the openshift-node profile if no other\nprofile with a higher priority matches on a given node.\n\n### Example\n\nThe following CR applies custom node-level tuning for\nOpenShift nodes that run an ingress pod with label\n`tuned.openshift.io/ingress-pod-label=ingress-pod-label-value`.\nAs an administrator, use the following command to create a custom Tuned CR.\n\n```\noc create -f- \u003c\u003c_EOF_\napiVersion: tuned.openshift.io/v1\nkind: Tuned\nmetadata:\n  name: ingress\n  namespace: openshift-cluster-node-tuning-operator\nspec:\n  profile:\n  - data: |\n      [main]\n      summary=A custom OpenShift ingress profile\n      include=openshift-control-plane\n      [sysctl]\n      net.ipv4.ip_local_port_range=\"1024 65535\"\n      net.ipv4.tcp_tw_reuse=1\n    name: openshift-ingress\n  recommend:\n  - match:\n    - label: tuned.openshift.io/ingress-pod-label\n      value: \"ingress-pod-label-value\"\n      type: pod\n    priority: 10\n    profile: openshift-ingress\n_EOF_\n```\n\n\n## Supported TuneD daemon plug-ins\n\nAside from the `[main]` section, the following\n[TuneD plug-ins](https://github.com/redhat-performance/tuned/tree/master/tuned/plugins)\nare supported when using [custom profiles](#custom-tuning-specification) defined\nin the `profile:` section of the Tuned CR:\n\n* audio\n* cpu\n* disk\n* eeepc_she\n* modules\n* mounts\n* net\n* rtentsk\n* scheduler\n* scsi_host\n* selinux\n* service\n* sysctl\n* sysfs\n* systemd\n* usb\n* video\n* vm\n\nwith the exception of dynamic tuning functionality provided by some of the plug-ins.\nThe following TuneD plug-ins are currently not fully supported:\n\n* bootloader\n* script\n\n\n## Additional tuning on fully-managed hosts\nSupport for the [stall daemon](https://github.com/bristot/stalld)\n(stalld) has been added to complement tuning performed by TuneD realtime\nprofiles. 
Currently, only hosts fully managed by the\n[Machine Config Operator](https://github.com/openshift/machine-config-operator)\n(MCO) can benefit from this functionality. To deploy stalld on such hosts,\nadd the following line to the TuneD service plugin.\n\n```\nservice.stalld=start,enable\n```\n\nA host-supplied configuration file can be used to override the stalld systemd\nunit created when using the line above in the TuneD service plugin. Refer to this\nfile by prefixing the absolute path of the overlay file on the host with\n`/host`.\n\n```\nservice.stalld=start,enable,file:/host\u003cabsolute_path_to_the_overlay_file_on_the_host\u003e\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopenshift%2Fcluster-node-tuning-operator","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fopenshift%2Fcluster-node-tuning-operator","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopenshift%2Fcluster-node-tuning-operator/lists"}