{"id":18637014,"url":"https://github.com/openshift/assisted-test-infra","last_synced_at":"2025-10-26T16:31:27.161Z","repository":{"id":36965040,"uuid":"275044265","full_name":"openshift/assisted-test-infra","owner":"openshift","description":null,"archived":false,"fork":false,"pushed_at":"2025-04-01T12:03:37.000Z","size":4408,"stargazers_count":42,"open_issues_count":11,"forks_count":108,"subscribers_count":18,"default_branch":"master","last_synced_at":"2025-04-01T12:25:22.485Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/openshift.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-06-26T00:48:08.000Z","updated_at":"2025-04-01T11:13:22.000Z","dependencies_parsed_at":"2023-10-03T17:44:17.345Z","dependency_job_id":"8e490413-a1fc-490c-892d-4bf543b853c9","html_url":"https://github.com/openshift/assisted-test-infra","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openshift%2Fassisted-test-infra","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openshift%2Fassisted-test-infra/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openshift%2Fassisted-test-infra/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/openshift%2Fassisted-test-infra/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/openshift","download_url":"https://codeload.github.com/openshift/assisted-test-infra/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247182401,"owners_count":20897381,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-07T05:32:33.534Z","updated_at":"2025-10-26T16:31:27.155Z","avatar_url":"https://github.com/openshift.png","language":"Python","readme":"**Table of contents**\n\n- [⚠️ Warning ⚠️](docs/warning.md)\n- [Overview](docs/overview.md)\n- [Prerequisites](docs/prerequisites)\n- [getting-started](docs/getting-started.md)\n  - [Deployment parameters](#deployment-parameters)\n    - [Components](#components)\n    - [Deployment config](#deployment-config)\n    - [Cluster configmap](#cluster-configmap)\n  - [Installation parameters](#installation-parameters)\n  - [Vsphere parameters](#vsphere-parameters)\n  - [Instructions](#instructions)\n    - [Host preparation](#host-preparation)\n  - [Usage](#usage)\n  - [Adding a new e2e flow](#adding-a-new-e2e-flow)\n  - [Full flow cases](#full-flow-cases)\n    - [Run full flow with install](#run-full-flow-with-install)\n    - [Run 
## Prerequisites

- CentOS 8 / RHEL 8 / Rocky 8 / AlmaLinux 8 host
- File system that supports d_type
- Ideally on a bare metal host with at least 64G of RAM.
- Run as a user with password-less `sudo` access, or be ready to enter the `sudo` password for the prepare phase.
- Make sure to unset the KUBECONFIG variable in the same shell where you run `make`.
- Get a valid pull secret (JSON string) from [redhat.com](https://console.redhat.com/openshift/install/pull-secret) if you want to test the installation (not needed for testing only the discovery flow).
Export it as:

```bash
export PULL_SECRET='<pull secret JSON>'
# or alternatively, define PULL_SECRET_FILE="/path/to/pull/secret/file"
```

## Installation Guide

Check the [Installation Guide](GUIDE.md) for installation instructions.

## Deployment parameters

### Components

|     |     |
| --- | --- |
| `AGENT_DOCKER_IMAGE`          | agent docker image to use, will update the assisted-service config map with the given value |
| `INSTALLER_IMAGE`             | assisted-installer image to use, will update the assisted-service config map with the given value |
| `SERVICE`                     | assisted-service image to use |
| `SERVICE_BRANCH`              | assisted-service branch to use, default: master |
| `SERVICE_BASE_REF`            | assisted-service base reference to merge `SERVICE_BRANCH` with, default: master |
| `SERVICE_REPO`                | assisted-service repository to use, default: https://github.com/openshift/assisted-service |
| `USE_LOCAL_SERVICE`           | if equal to `true`, assisted-service will be built from the `assisted-test-infra/assisted-service` code |
| `DEBUG_SERVICE`               | if equal to `true`, assisted-service will be built from the `assisted-test-infra/assisted-service` code and deployed in debug mode, exposing port `40000` for `dlv` connection. |
| `LOAD_BALANCER_TYPE`          | Set to `cluster-managed` if the load-balancer will be deployed by OpenShift, and `user-managed` if it will be deployed externally by the user. |

**Note** - When using `USE_LOCAL_SERVICE` or `DEBUG_SERVICE`, the local assisted-service code is used. Therefore the `bring_assisted_service.sh` script will not change the local service code unless it is missing. If you want to import assisted-service changes, run

```bash
make bring_assisted_service SERVICE_REPO=<assisted-service repository to use> SERVICE_BASE_REF=<assisted-service branch to use>
```

before you start the deployment.

### Deployment config

|     |     |
| --- | --- |
| `ASSISTED_SERVICE_HOST`              | FQDN or IP address where assisted-service is deployed. Used when DEPLOY_TARGET="onprem". |
| `DEPLOY_MANIFEST_PATH`               | the location of a manifest file that defines the image tags to be used |
| `DEPLOY_MANIFEST_TAG`                | the Git tag of a manifest file that defines the image tags to be used |
| `DEPLOY_TAG`                         | the tag to be used for all images (assisted-service, assisted-installer, agent, etc.); this overrides the other image parameters |
| `DEPLOY_TARGET`                      | Specifies where assisted-service will be deployed. Defaults to "minikube". Other options are "onprem" for installing as a podman pod and "kind". |
| `KUBECONFIG`                         | kubeconfig file path, default: <home>/.kube/config |
| `SERVICE_NAME`                       | assisted-service target service name, default: assisted-service |
| `OPENSHIFT_VERSION`                  | The OCP version which will be supported by the deployed components. Should be in `x.y` format |
| `OPENSHIFT_INSTALL_RELEASE_IMAGE`    | The OCP release image reference which will be supported by the deployed components. For example - `quay.io/openshift-release-dev/ocp-release:4.16.0-x86_64` |
| `INSTALL_WORKING_DIR`                | The path to a working directory where files like iPXE scripts, boot artefacts, etc. are stored. For example `/tmp` |
| `MACHINE_CIDR_IPV4`                  | The machine CIDR, e.g. for remote libvirt. Default is `192.168.127.0/24` |
| `MACHINE_CIDR_IPV6`                  | The machine CIDR, e.g. for remote libvirt. Default is `1001:db9::/120` |
| `USE_DHCP_FOR_LIBVIRT`               | Use DHCP for libvirt on the s390x architecture. If set to true, the `MAC_LIBVIRT_PREFIX` parameter must be specified. Default is `True`. |
| `MAC_LIBVIRT_PREFIX`                 | The MAC address used for DHCP for KVM. Example `54:52:00:00:7a:00`. The last two digits are incremented for every node: the first node will get `54:52:00:00:7a:00`, the second node will get `54:52:00:00:7a:01` |
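
For example, to deploy assisted-service on a kind cluster against a specific OCP release, the relevant variables from the table above can be exported before the deployment. This is a minimal sketch; the values are illustrative and should be adapted to your environment:

```bash
# Illustrative deployment-config overrides; adjust the values to your environment.
export DEPLOY_TARGET=kind                # deploy assisted-service on kind instead of the default minikube
export OPENSHIFT_VERSION=4.16            # OCP minor version the deployed components should support (x.y format)
export OPENSHIFT_INSTALL_RELEASE_IMAGE=quay.io/openshift-release-dev/ocp-release:4.16.0-x86_64
make run                                 # deploy assisted-service with the configuration above
```
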
### Minikube configuration

|   |   |
|---|---|
| `MINIKUBE_DRIVER`| set minikube driver, default = kvm2 |
| `MINIKUBE_CPUS`| set amount of cpus, default = 4 |
| `MINIKUBE_MEMORY`| set amount of memory, default = 8G |
| `MINIKUBE_DISK_SIZE`| set disk size, default = 50G |
| `MINIKUBE_HOME`| set default location for minikube, default = ~/.minikube |
| `MINIKUBE_REGISTRY_IMAGE`| set registry image, default = "quay.io/libpod/registry:2.8" |

### Cluster configmap

|     |     |
| --- | --- |
| `BASE_DNS_DOMAINS`                      | base DNS domains that are managed by assisted-service, format: domain_name:domain_id/provider_type. |
| `AUTH_TYPE`                             | configure the type of authentication assisted-service will use, default: none |
| `IPv4`                                  | Boolean value indicating if IPv4 is enabled. Default is yes |
| `IPv6`                                  | Boolean value indicating if IPv6 is enabled. Default is no |
| `STATIC_IPS`                            | Boolean value indicating if static networking should be enabled. Default is no |
| `IS_BONDED`                             | Boolean value indicating if bonding should be enabled. It also implies static networking. Default is no |
| `NUM_BONDED_SLAVES`                     | Integer value indicating the number of bonded slaves per bond. It is only used if bonding support is enabled. Default is 2 |
| `BONDING_MODE`                          | Bonding mode when bonding is in use. Default is active-backup |
| `OCM_BASE_URL`                          | OCM API URL used to communicate with OCM and AMS, default: https://api.integration.openshift.com/ |
| `OCM_CLIENT_ID`                         | ID of the Service Account used to communicate with OCM and AMS for Agent Auth and Authz |
| `OCM_CLIENT_SECRET`                     | Password of the Service Account used to communicate with OCM and AMS for Agent Auth and Authz |
| `JWKS_URL`                              | URL for retrieving the JSON Web Key Set (JWKS) used for verifying JWT tokens in authentication. Defaults to https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/certs |
| `OC_MODE`                               | if set, use oc instead of minikube |
| `OC_SCHEME`                             | Scheme for the assisted-service URL on oc, default: http |
| `OC_SERVER`                             | server for oc login, required if oc-token is provided, default: https://api.ocp.prod.psi.redhat.com:6443 |
| `OC_TOKEN`                              | token for oc login (an alternative to oc-user & oc-pass) |
| `OCM_SELF_TOKEN`                        | offline token used to fetch JWT tokens for assisted-service authentication (from https://console.redhat.com/openshift/token) |
| `ACKNOWLEDGE_DEPRECATED_OCM_SELF_TOKEN` | flag indicating acknowledgement of the offline token deprecation; should be `yes` when `OCM_SELF_TOKEN` is used |
| `PROXY`                                 | Set HTTP and HTTPS proxy with default proxy targets. The target is the default gateway in the network having the machine network CIDR |
| `SERVICE_BASE_URL`                      | update the assisted-service config map SERVICE_BASE_URL parameter with the given URL, including port and protocol |
| `PUBLIC_CONTAINER_REGISTRIES`           | comma-separated list of registries that do not require authentication for pulling assisted installer images |
| `ENABLE_KUBE_API`                       | If set, deploy assisted-service with Kube API controllers (minikube only) |
| `DISABLED_HOST_VALIDATIONS`             | comma-separated list of validation IDs to be excluded from the host validation process. |
| `SSO_URL`                               | URL used to fetch JWT tokens for assisted-service authentication |
| `CHECK_CLUSTER_VERSION`                 | If "True", the controller will wait for CVO to finish |
| `AGENT_TIMEOUT_START`                   | Update the assisted-service config map AGENT_TIMEOUT_START parameter. Default is 3m. |
| `OS_IMAGES`                             | A list of available OS images (one for each minor OCP version and CPU architecture) |
| `RELEASE_IMAGES`                        | A list of available release images (one for each minor OCP version and CPU architecture) |
| `NVIDIA_REQUIRE_GPU`                    | Boolean value indicating if NVIDIA GPU requirements should be enforced, default: `true` |
| `AMD_REQUIRE_GPU`                       | Boolean value indicating if AMD GPU requirements should be enforced, default: `true` |
| `TNA_CLUSTERS_SUPPORT`                  | Boolean value indicating if TNA clusters should be supported, default: `false` |
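
As an illustration, the two tables above can be combined to start a deployment with a larger Minikube VM and the Kube API controllers enabled. This is a minimal sketch with example resource values:

```bash
# Illustrative Minikube sizing and configmap overrides; the values are examples only.
export MINIKUBE_CPUS=8          # more CPUs than the default 4
export MINIKUBE_MEMORY=16G      # more memory than the default 8G
export ENABLE_KUBE_API=true     # deploy assisted-service with the Kube API controllers (minikube only)
make run
```
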
## Installation parameters

|     |     |
| --- | --- |
| `BASE_DOMAIN`                | base domain, needed for DNS name, default: redhat.com |
| `CLUSTER_ID`                 | cluster id, used for an already existing cluster, e.g. after the deploy_nodes command |
| `CLUSTER_NAME`               | cluster name, used as prefix for virsh resources, default: test-infra-cluster |
| `HTTPS_PROXY_URL`            | A proxy URL to use for creating HTTPS connections outside the cluster |
| `HTTP_PROXY_URL`             | A proxy URL to use for creating HTTP connections outside the cluster |
| `ISO`                        | path to an ISO to spawn VMs with; if set, VMs will be spawned from this ISO without creating a cluster. File must have the '.iso' suffix |
| `MASTER_MEMORY`              | memory for master VM, default: 16384MB |
| `NETWORK_CIDR`               | network CIDR to use for the virsh VM network, default: "192.168.126.0/24" |
| `NETWORK_NAME`               | virsh network name for VM creation, default: test-infra-net |
| `NO_PROXY_VALUES`            | A comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying |
| `NUM_MASTERS`                | number of VMs to spawn as masters, default: 3 |
| `NUM_WORKERS`                | number of VMs to spawn as workers, default: 0 |
| `NUM_ARBITERS`               | number of VMs to spawn as arbiters, default: 0 |
| `OPENSHIFT_VERSION`          | OpenShift version to install, default taken from the deployed assisted-service (`/v2/openshift-versions`) |
| `HYPERTHREADING`             | Set the nodes' CPU hyperthreading mode. Values are: all, none, masters, workers. default: all |
| `DISK_ENCRYPTION_MODE`       | Set disk encryption mode. Right now assisted-test-infra only supports "tpmv2", which is also the default. |
| `DISK_ENCRYPTION_ROLES`      | Set node roles to apply disk encryption to. Values are: all, none, masters, workers. default: none |
| `PULL_SECRET`                | pull secret to use for the cluster installation command; there is no option to install a cluster without it. |
| `PULL_SECRET_FILE`           | path and name of the file containing the pull secret to use for the cluster installation command; there is no option to install a cluster without it. |
| `REMOTE_SERVICE_URL`         | URL to a remote assisted-service - run infra on an existing deployment |
| `ROUTE53_SECRET`             | Amazon Route 53 secret to use for DNS domain registration. |
| `WORKER_MEMORY`              | memory for worker VM, default: 8892MB |
| `SSH_PUB_KEY`                | SSH public key to use for image generation, gives the option to SSH to VMs, default: ~/.ssh/id_rsa.pub |
| `IPXE_BOOT`                  | Boots VMs using iPXE if set to `true`, default: `false` |
| `PLATFORM`                   | The openshift platform to integrate with, one of: `baremetal`, `none`, `vsphere`, `external`, default: `baremetal` |
| `KERNEL_ARGUMENTS`           | Update live ISO kernel arguments. JSON formatted string containing an array of dictionaries, each having 2 attributes: `operation` and `value`. Currently, only the `append` operation is supported. |
| `CPU_ARCHITECTURE`           | CPU architecture of the nodes that will be part of the cluster, one of: `x86_64`, `arm64`, `s390x`, `ppc64le`, default: `x86_64` |
| `DAY2_CPU_ARCHITECTURE`      | CPU architecture of the nodes that will be part of the cluster in day2, one of: `x86_64`, `arm64`, `s390x`, `ppc64le`, default: `x86_64` |
| `CUSTOM_MANIFESTS_FILES`     | List of local manifest files separated by commas, or a path to a directory containing multiple manifests |
| `DISCONNECTED`               | Set to "true" if a local mirror needs to be used |
| `REGISTRY_CA_PATH`           | Path to the mirror registry CA bundle |
| `HOST_INSTALLER_ARGS`        | JSON formatted string used to customize installer arguments on all the hosts. Example: `{"args": ["--append-karg", "console=ttyS0"]}` |
| `LOAD_BALANCER_TYPE`         | Set to `cluster-managed` if the load-balancer will be deployed by OpenShift, and `user-managed` if it will be deployed externally by the user. |
| `SET_INFRAENV_VERSION`       | If `true`, sets the `osImageVersion` field on the `InfraEnv` to the `OPENSHIFT_VERSION` to ensure the discovery ISO uses this OCP version for tests, default: `false` |
| `OLM_OPERATORS`              | Comma-separated list of OLM operators to install on the cluster (e.g., `mce,odf,metallb`) |
| `OLM_BUNDLES`                | Comma-separated list of operator bundles to install on the cluster (e.g., `virtualization,openshift-ai`). Bundles are expanded to their constituent operators automatically |
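
As an example, a small cluster with an extra console kernel argument on the live ISO could be requested as follows. This is a minimal sketch and the values are illustrative:

```bash
# Illustrative installation parameters; adjust the values to your test case.
export NUM_MASTERS=3
export NUM_WORKERS=2
export MASTER_MEMORY=16384
export WORKER_MEMORY=8892
# KERNEL_ARGUMENTS is a JSON array of {operation, value} objects; only "append" is supported.
export KERNEL_ARGUMENTS='[{"operation": "append", "value": "console=ttyS0"}]'
make run deploy_nodes_with_install
```
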
## Vsphere parameters

|     |     |
| --- | --- |
| `VSPHERE_CLUSTER`                 | vSphere cluster name (a vSphere cluster is a group of hosts managed by vCenter), mandatory for the vsphere platform |
| `VSPHERE_VCENTER`                 | vCenter server IP address or FQDN (the vCenter server name for vSphere API operations), mandatory for the vsphere platform |
| `VSPHERE_DATACENTER`              | vSphere data center name, mandatory for the vsphere platform |
| `VSPHERE_NETWORK`                 | vSphere publicly accessible network for cluster ingress and access, e.g. VM Network, mandatory for the vsphere platform |
| `VSPHERE_DATASTORE`               | vSphere data store name, mandatory for the vsphere platform |
| `VSPHERE_USERNAME`                | vCenter server username, mandatory for the vsphere platform |
| `VSPHERE_PASSWORD`                | vCenter server password, mandatory for the vsphere platform |
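
A vSphere run therefore exports `PLATFORM=vsphere` together with the mandatory variables above. This is a minimal sketch in which every value is a placeholder:

```bash
# Illustrative vSphere configuration; every value below is a placeholder.
export PLATFORM=vsphere
export VSPHERE_VCENTER=vcenter.example.com
export VSPHERE_USERNAME='administrator@vsphere.local'
export VSPHERE_PASSWORD='<vcenter password>'
export VSPHERE_CLUSTER='<vsphere cluster name>'
export VSPHERE_DATACENTER='<vsphere datacenter name>'
export VSPHERE_DATASTORE='<vsphere datastore name>'
export VSPHERE_NETWORK="VM Network"
make run deploy_nodes_with_install
```
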
## Redfish parameters

|     |     |
| --- | --- |
| `REDFISH_ENABLED`                   | Enable the Redfish API for managing hardware servers |
| `REDFISH_USER`                      | Redfish remote management user |
| `REDFISH_PASSWORD`                  | Redfish remote management password |
| `REDFISH_MACHINES`                  | List of Redfish remote management IPv4 addresses |

## External parameters

|     |     |
| --- | --- |
| `EXTERNAL_PLATFORM_NAME`            | Platform name when using the `external` platform |
| `EXTERNAL_CLOUD_CONTROLLER_MANAGER` | Cloud controller manager when using the `external` platform |
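
For the `external` platform these variables are combined with `PLATFORM=external`. The sketch below is illustrative only: the platform name `oci` and the CCM value are examples, and the supported values should be checked against the assisted-service documentation:

```bash
# Illustrative external-platform configuration; platform name and CCM value are examples only.
export PLATFORM=external
export EXTERNAL_PLATFORM_NAME=oci
export EXTERNAL_CLOUD_CONTROLLER_MANAGER=External
make run deploy_nodes_with_install
```
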
## Instructions

### Host preparation

On the bare metal host:

**Note**: don't do this from the /root folder - it will break the build image mounts and fail to run

```bash
dnf install -y git make
cd /home/test
git clone https://github.com/openshift/assisted-test-infra.git
```

When using this infra for the first time on a host, run:

```bash
make setup
```

This will install the required packages, configure libvirt, pull the relevant Docker images, and start Minikube.

## Usage

There are different ways to use test-infra; the available targets can be found in the Makefile.

## Adding a new e2e flow

Documentation and guidelines on how to create a new e2e test can be found [here](GUIDE.md#adding-a-new-e2e-flow).

## Full flow cases

The following is the list of stages that will be run:

1. Start Minikube if not started yet
1. Deploy services for assisted deployment on Minikube
1. Create a cluster in the `assisted-service` service
1. Download the ISO image
1. Spawn the required number of VMs from the downloaded ISO with parameters that can be configured via OS environment variables (check the makefile)
1. Wait until the nodes are up and registered in `assisted-service`
1. Set node roles in `assisted-service` by matching VM names (worker/master)
1. Verify all nodes have the required hardware to start the installation
1. Install the nodes
1. Download `kubeconfig-noingress` to build/kubeconfig
1. Wait until the nodes are in the `installed` state, while verifying that they don't move to the `error` state
1. Verify the cluster is in the `installed` state
1. Download the kubeconfig to build/kubeconfig

**Note**: Please make sure no previous cluster is running before running a new one (it will rewrite its build files).
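
After a successful full flow, the downloaded kubeconfig can be used to inspect the installed cluster. This is a minimal sketch, assuming the `oc` client is available on the host:

```bash
# Inspect the installed cluster with the kubeconfig downloaded to build/kubeconfig.
export KUBECONFIG=./build/kubeconfig
oc get nodes            # list the installed nodes
oc get clusterversion   # verify the cluster finished installing
```
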
### Run full flow with install

To run the full flow, including installation:

```bash
make run deploy_nodes_with_install
```

Or to run it together with `setup` (requires the `sudo` password):

```bash
make all
```

### Run full flow without install

To run the flow without the installation stage:

```bash
make run deploy_nodes_with_networking
```

### Run base flow without configuring networking

Deploy the nodes without the network configuration and without the installation stage:

```bash
make run deploy_nodes
```

### Run full flow with ipv6

To run the flow with default IPv6 settings:

```bash
make deploy_nodes_with_install IPv4=no IPv6=yes
```

### Redeploy nodes

```bash
make redeploy_nodes
```

Or:

```bash
make redeploy_nodes_with_install
```

### Cleaning

The following sections show how to clean up the test-infra environment.

#### Clean all include minikube

```bash
make destroy
```

#### Clean nodes only

```bash
make destroy_nodes
```

#### Delete all virsh resources

Sometimes you may need to delete all libvirt resources:

```bash
make delete_all_virsh_resources
```

### Create cluster and download ISO

```bash
make download_iso
```

### Deploy Assisted Service and Monitoring stack

```bash
make run
make deploy_monitoring
```

### `deploy_assisted_service` and Create cluster and download ISO

```bash
make download_iso_for_remote_use
```

### start_minikube and Deploy UI and open port forwarding on port 6008, allows to connect to it from browser

```bash
make deploy_ui
```

### Kill all open port forwarding commands, will be part of destroy target

```bash
make kill_all_port_forwardings
```

## Test `assisted-service` image

```bash
make destroy run SERVICE=<image to test>
```

### Test agent image

```bash
make destroy run AGENT_DOCKER_IMAGE=<image to test>
```

### Test installer image or controller image

```bash
make destroy run INSTALLER_IMAGE=<image to test> CONTROLLER_IMAGE=<image to test>
```

## Test installer, controller, `assisted-service` and agent images in the same flow

```bash
make destroy run INSTALLER_IMAGE=<image to test> AGENT_DOCKER_IMAGE=<image to test> SERVICE=<image to test>
```

### Test infra image

Assisted-test-infra builds an image including all the prerequisites needed to work with this repository.

```bash
make image_build
```

## In case you would like to build the image with a different `assisted-service` client

```bash
make image_build SERVICE_REPO=<assisted-service repository to use> SERVICE_BASE_REF=<assisted-service branch to use>
```

## Test with RHSSO Authentication

To test with authentication, the following additional environment variables are required:

```
export AUTH_TYPE=rhsso
export OCM_BASE_URL=https://api.openshift.com
export JWKS_URL=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/certs
```

There are currently two ways to authenticate:
  1. Using a service account - The service account needs to have the necessary roles in order to make requests to OCM to check user roles/capabilities
  ```
  export OCM_CLIENT_ID=<SSO Service Account Name>
  export OCM_CLIENT_SECRET=<SSO Service Account Password>
  ```
  2. Using an offline token (soon to be deprecated)
  ```
  export OCM_SELF_TOKEN=<User token from https://console.redhat.com/openshift/token>
  export ACKNOWLEDGE_DEPRECATED_OCM_SELF_TOKEN=yes
  ```

- The UI is not available when authentication is enabled.
- The PULL_SECRET variable should be taken from the same Red Hat cloud environment as defined in OCM_BASE_URL (integration, stage or production).

## Single Node - Bootstrap in place with Assisted Service

To test the single node bootstrap-in-place flow with assisted service:

```
export PULL_SECRET='<pull secret JSON>'
export OPENSHIFT_INSTALL_RELEASE_IMAGE=<relevant release image if needed>
export NUM_MASTERS=1
make deploy_nodes_with_install
```

Set the BIP_BUTANE_CONFIG env var to the path of a Butane config to be merged with the bootstrap config. This might be useful for promtail logging or other debug tasks.

## Single Node - Bootstrap in place with Assisted Service and IPv6

To test the single node bootstrap-in-place flow with assisted service and IPv6:

```
export PULL_SECRET='<pull secret JSON>'
export OPENSHIFT_INSTALL_RELEASE_IMAGE=<relevant release image if needed>
export NUM_MASTERS=1
make deploy_nodes_with_install IPv6=yes IPv4=no
```

## Kind

Set ``DEPLOY_TARGET=kind`` to get a full deployment of assisted-installer on top
of a Kubernetes cluster that runs as a single podman container:

```
# currently it's advisable to set it throughout the entire testing session because
# tests are also using this env-var to understand the networking layout
export DEPLOY_TARGET=kind

make run deploy_nodes_with_install
```

You can also create just the kind cluster by running:
```
make create_hub_cluster DEPLOY_TARGET=kind
```

In ``kind`` mode you should be able to access the UI / API via ``http://<host>/``.

## On-prem

To test on-prem in the e2e flow, two additional environment variables need to be set:

```
export DEPLOY_TARGET=onprem
export ASSISTED_SERVICE_HOST=<fqdn-or-ip>
```

Setting DEPLOY_TARGET to "onprem" configures assisted-test-infra to deploy
the assisted-service as a pod on your local host.

ASSISTED_SERVICE_HOST defines where the assisted-service will be deployed. For "onprem" deployments, set it to the FQDN or IP address of the host.

Optionally, you can also provide OPENSHIFT_INSTALL_RELEASE_IMAGE and PUBLIC_CONTAINER_REGISTRIES:

```
export OPENSHIFT_INSTALL_RELEASE_IMAGE=quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64
export PUBLIC_CONTAINER_REGISTRIES=quay.io
```

If you do not export the optional variables, the test will run with the defaults specified in assisted-service/onprem-environment.

Then run the same commands described in the instructions above to execute the test.

To run the full flow:

```
make all
```

To clean up after the full flow:

```
make destroy
```

## Run operator

The current implementation installs an OCP cluster using assisted service on minikube.
Afterwards, we install the assisted-service-operator on top of that cluster.
The first step could be removed once we can either:

- Have an OCP cluster easily (e.g. [CRC](https://developers.redhat.com/products/codeready-containers/overview))
- Install the assisted-service operator on top of a pure-k8s cluster. (At the moment there are some OCP component prerequisites)

```bash
# Deploy AI
make run deploy_nodes_with_install

# Deploy AI Operator on top of the new cluster
export KUBECONFIG=./build/kubeconfig
make deploy_assisted_operator
```

Clear the operator deployment:

```bash
make clear_operator
```

Run installation with the operator:

```bash
export INSTALLER_KUBECONFIG=./build/kubeconfig
export TEST_FUNC=test_kube_api_ipv4
export TEST=./src/tests/test_kube_api.py
export TEST_TEARDOWN=false
make test
```

## Cluster-API-provider-agent

To test the capi-provider e2e flow, a few additional environment variables need to be set; they provision the bigger Minikube instance this flow requires:
```bash
# The following exports are required since the capi test flow requires more resources than the default minikube deployment provides
export MINIKUBE_HOME=/home
export MINIKUBE_DISK_SIZE=100g
export MINIKUBE_RAM_MB=12288
```
Set up minikube with assisted-installer (kube-api enabled):
```bash
export PULL_SECRET=<your pull secret>
ENABLE_KUBE_API=true make run
```

Deploy capi-provider-agent and hypershift:
```bash
make deploy_capi_env
```
Run the test:
```bash
ENABLE_KUBE_API=true make test TEST=./src/tests/test_kube_api.py TEST_FUNC=test_capi_provider KUBECONFIG=$HOME/.kube/config
```

## Test iPXE boot flow

To test e2e deploying and installing nodes using iPXE, run the following:
```bash
export IPXE_BOOT=true
make setup
make run
make deploy_nodes_with_install
```

Optional environment variables that may be set for this test:
|     |     |
| --- | --- |
| `IPXE_BOOT` | Boots VM hosts using iPXE if set to `true`, default: `false` |

**Notes**:
* A containerized Python server is used to host the iPXE scripts for each cluster. This is because the URL of the iPXE script file hosted in assisted-service is longer than the character limit allowed by libvirt.

## Test MCE and storage

To test that MCE is deployed correctly with a storage driver, run the following:
```bash
export OLM_OPERATORS=mce,odf
export NUM_WORKERS=3
export WORKER_MEMORY=50000
export WORKER_CPU=20
export TEST_FUNC=test_mce_storage_post
make setup
make run
make deploy_nodes_with_install
export KUBECONFIG=./build/kubeconfig
export KUBECONFIG=$(find ${KUBECONFIG} -type f)
make test_parallel
```