Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/terraform-ibm-modules/terraform-ibm-ocp-all-inclusive
A wrapper module around several other modules, which sets up an OpenShift cluster
- Host: GitHub
- URL: https://github.com/terraform-ibm-modules/terraform-ibm-ocp-all-inclusive
- Owner: terraform-ibm-modules
- License: apache-2.0
- Created: 2023-02-21T09:03:24.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2024-09-27T15:15:53.000Z (3 months ago)
- Last Synced: 2024-09-27T19:21:45.578Z (3 months ago)
- Topics: core-team, graduated, ibm-cloud, ocp, ocp-all-inclusive, openshift-cluster, supported, terraform, terraform-module, vpc-cluster
- Language: HCL
- Size: 546 KB
- Stars: 1
- Watchers: 17
- Forks: 2
- Open Issues: 1
- Metadata Files:
    - Readme: README.md
    - License: LICENSE
    - Codeowners: .github/CODEOWNERS
README
# Red Hat OCP (OpenShift Container Platform) All Inclusive Module
[![Graduated (Supported)](https://img.shields.io/badge/Status-Graduated%20(Supported)-brightgreen)](https://terraform-ibm-modules.github.io/documentation/#/badge-status)
[![Build status](https://github.com/terraform-ibm-modules/terraform-ibm-ocp-all-inclusive/actions/workflows/ci.yml/badge.svg)](https://github.com/terraform-ibm-modules/terraform-ibm-ocp-all-inclusive/actions/workflows/ci.yml)
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://github.com/pre-commit/pre-commit)
[![latest release](https://img.shields.io/github/v/release/terraform-ibm-modules/terraform-ibm-ocp-all-inclusive?logo=GitHub&sort=semver)](https://github.com/terraform-ibm-modules/terraform-ibm-ocp-all-inclusive/releases/latest)
[![Renovate enabled](https://img.shields.io/badge/renovate-enabled-brightgreen.svg)](https://renovatebot.com/)
[![semantic-release](https://img.shields.io/badge/%20%20%F0%9F%93%A6%F0%9F%9A%80-semantic--release-e10079.svg)](https://github.com/semantic-release/semantic-release)

This module is a wrapper module that groups the following modules:
- [base-ocp-vpc-module](https://github.com/terraform-ibm-modules/terraform-ibm-base-ocp-vpc) - Provisions a base (bare) Red Hat OpenShift Container Platform cluster on VPC Gen2 (supports passing Key Protect details to encrypt cluster).
- [observability-agents-module](https://github.com/terraform-ibm-modules/terraform-ibm-observability-agents) - Deploys Log Analysis and Cloud Monitoring agents to a cluster.

:exclamation: **Important:** You can't update Red Hat OpenShift cluster nodes by using this module. The Terraform logic ignores updates to prevent possible destructive changes.
## Before you begin
- Make sure that you have a recent version of the [IBM Cloud CLI](https://cloud.ibm.com/docs/cli?topic=cli-getting-started)
- Make sure that you have a recent version of the [IBM Cloud Kubernetes service CLI](https://cloud.ibm.com/docs/containers?topic=containers-kubernetes-service-cli)

### Default Worker Pool management
You can manage the default worker pool with Terraform and make changes to it through this module. This option is enabled by default. Under the hood, the default worker pool is imported as an `ibm_container_vpc_worker_pool` resource. Advanced users may opt out by setting the `import_default_worker_pool_on_create` parameter to `false`. For most use cases, it is recommended to leave this variable set to `true`.
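For illustration, a minimal sketch of the opt-out; everything except `import_default_worker_pool_on_create` is elided or a placeholder, and `"X.Y.Z"` stands in for a real release version:

```hcl
module "ocp_all_inclusive" {
  source  = "terraform-ibm-modules/ocp-all-inclusive/ibm"
  version = "X.Y.Z" # placeholder: pin to a real release

  # ...required inputs such as resource_group_id, region, cluster_name,
  # vpc_id, and vpc_subnets go here (see the Usage section below)...

  # Advanced: manage the default worker pool as part of the cluster resource
  # instead of importing it as a stand-alone ibm_container_vpc_worker_pool.
  import_default_worker_pool_on_create = false
}
```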
#### Important Considerations for Terraform and Default Worker Pool
**Terraform Destroy**
When using the default behavior of handling the default worker pool as a stand-alone `ibm_container_vpc_worker_pool`, you must manually remove the default worker pool from the Terraform state before running a `terraform destroy` command on the module. This is due to a [known limitation](https://cloud.ibm.com/docs/containers?topic=containers-faqs#smallest_cluster) in IBM Cloud.

**Terraform CLI Example**
For a cluster with 2 worker pools, named 'default' and 'secondarypool', follow these steps:
```sh
$ terraform state list | grep ibm_container_vpc_worker_pool
> module.ocp_all_inclusive.module.ocp_base.data.ibm_container_vpc_worker_pool.all_pools["default"]
> module.ocp_all_inclusive.module.ocp_base.data.ibm_container_vpc_worker_pool.all_pools["secondarypool"]
> module.ocp_all_inclusive.module.ocp_base.ibm_container_vpc_worker_pool.pool["default"]
> module.ocp_all_inclusive.module.ocp_base.ibm_container_vpc_worker_pool.pool["secondarypool"]
> ...

$ terraform state rm "module.ocp_all_inclusive.module.ocp_base.ibm_container_vpc_worker_pool.pool[\"default\"]"
```

**Schematics Example**

For a cluster with 2 worker pools, named 'default' and 'secondarypool', follow these steps:
```sh
$ ibmcloud schematics workspace state rm --id --address "module.ocp_all_inclusive.module.ocp_base.ibm_container_vpc_worker_pool.pool[\"default\"]"
```

**Changes Requiring Re-creation of Default Worker Pool**
If you need to make changes to the default worker pool that require re-creating it (for example, changing the worker node `operating_system`), you must set the `allow_default_worker_pool_replacement` variable to `true`, perform the apply, and then set it back to `false` in the code before the subsequent apply. This is **only** necessary for changes that require re-creating the entire default pool and is **not needed for scenarios that do not require recreating the worker pool, such as changing the number of workers in the default worker pool**.
This approach is due to a limitation in the Terraform provider that may be lifted in the future.
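As an illustration of that two-apply workflow (variable names are from this module's inputs; the `operating_system` value shown is hypothetical):

```hcl
module "ocp_all_inclusive" {
  source  = "terraform-ibm-modules/ocp-all-inclusive/ibm"
  version = "X.Y.Z" # placeholder: pin to a real release

  # ...other inputs...

  # Apply 1: allow replacement while changing a property that forces
  # re-creation of the default pool (operating_system value is hypothetical).
  allow_default_worker_pool_replacement = true
  operating_system                      = "REDHAT_8_64"

  # Apply 2 (after the change succeeds): set the flag back to false
  # in the code before the next apply.
  # allow_default_worker_pool_replacement = false
}
```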
## Overview
* [terraform-ibm-ocp-all-inclusive](#terraform-ibm-ocp-all-inclusive)
* [Examples](./examples)
* [Complete Example](./examples/end-to-end-example)
* [Contributing](#contributing)

## terraform-ibm-ocp-all-inclusive
### Usage
```hcl
##############################################################################
# Required providers
##############################################################################

provider "ibm" {
  ibmcloud_api_key = "XXXXXXXXXX" # pragma: allowlist secret
  region           = "us-south"
}

# Data lookup required to initialize the helm and kubernetes providers
data "ibm_container_cluster_config" "cluster_config" {
  cluster_name_id = module.ocp_all_inclusive.cluster_id
}

provider "helm" {
  kubernetes {
    host  = data.ibm_container_cluster_config.cluster_config.host
    token = data.ibm_container_cluster_config.cluster_config.token
  }
}

provider "kubernetes" {
  host  = data.ibm_container_cluster_config.cluster_config.host
  token = data.ibm_container_cluster_config.cluster_config.token
}

##############################################################################
# ocp-all-inclusive-module
##############################################################################

module "ocp_all_inclusive" {
  source            = "terraform-ibm-modules/ocp-all-inclusive/ibm"
  version           = "X.Y.Z" # Replace "X.Y.Z" with a release version to lock into a specific release
  ibmcloud_api_key  = "XXXXXXXXXX" # pragma: allowlist secret
  resource_group_id = "xxXXxxXXxXxXXXXxxXxxxXXXXxXXXXX"
  region            = "us-south"
  cluster_name      = "my-test-cluster"
  cos_name          = "my-cos-instance"
  vpc_id            = "xxXXxxXXxXxXXXXxxXxxxXXXXxXXXXX"
  vpc_subnets = {
    zone-1 = [
      for zone in module.vpc.subnet_zone_list :
      {
        id         = zone.id
        zone       = zone.zone
        cidr_block = zone.cidr
      }
    ]
  }
  log_analysis_instance_name     = "my-logdna"
  log_analysis_ingestion_key     = "xxXXxxXXxXxXXXXxxXxxxXXXXxXXXXX"
  cloud_monitoring_instance_name = "my-sysdig"
  cloud_monitoring_access_key    = "xxXXxxXXxXxXXXXxxXxxxXXXXxXXXXX"
}
```

### Required IAM access policies
You need the following permissions to run this module. (A hedged Terraform sketch of one of these policies follows the list.)

- Account Management
    - **All Identity and Access Enabled** service
        - `Viewer` platform access
    - **All Resource Groups** service
        - `Viewer` platform access
- IAM Services
    - **Cloud Object Storage** service
        - `Editor` platform access
        - `Manager` service access
    - **Kubernetes** service
        - `Administrator` platform access
        - `Manager` service access
    - **VPC Infrastructure** service
        - `Administrator` platform access
        - `Manager` service access
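For illustration only, a sketch of how the Kubernetes policy from the list above could be granted with the IBM provider's `ibm_iam_access_group_policy` resource; the access group name is a placeholder, and in practice these policies are often assigned directly in IAM:

```hcl
# Hypothetical access group for the identities that run this module.
resource "ibm_iam_access_group" "module_runners" {
  name = "module-runners" # placeholder name
}

# Grant the Kubernetes service policy (Administrator platform access,
# Manager service access) from the list above to that group.
resource "ibm_iam_access_group_policy" "kubernetes_policy" {
  access_group_id = ibm_iam_access_group.module_runners.id
  roles           = ["Administrator", "Manager"]

  resources {
    service = "containers-kubernetes"
  }
}
```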
### Requirements

| Name | Version |
|------|---------|
| [terraform](#requirement\_terraform) | >= 1.3.0 |
| [external](#requirement\_external) | >= 2.2.3, < 3.0.0 |
| [helm](#requirement\_helm) | >= 2.8.0, < 3.0.0 |
| [ibm](#requirement\_ibm) | >= 1.66.0, < 2.0.0 |
| [kubernetes](#requirement\_kubernetes) | >= 2.16.1, < 3.0.0 |
| [local](#requirement\_local) | >= 2.2.3, < 3.0.0 |
| [null](#requirement\_null) | >= 3.2.1, < 4.0.0 |
| [time](#requirement\_time) | >= 0.9.1, < 1.0.0 |

### Modules
| Name | Source | Version |
|------|--------|---------|
| [observability\_agents](#module\_observability\_agents) | terraform-ibm-modules/observability-agents/ibm | 1.29.1 |
| [ocp\_base](#module\_ocp\_base) | terraform-ibm-modules/base-ocp-vpc/ibm | 3.32.0 |

### Resources
No resources.
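For convenience, the version constraints from the Requirements table can be pinned in the calling root module. A minimal sketch, assuming the standard registry sources for each provider (only `IBM-Cloud/ibm` is non-HashiCorp):

```hcl
terraform {
  required_version = ">= 1.3.0"

  required_providers {
    # Constraints copied from the Requirements table above.
    external = {
      source  = "hashicorp/external"
      version = ">= 2.2.3, < 3.0.0"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.8.0, < 3.0.0"
    }
    ibm = {
      source  = "IBM-Cloud/ibm"
      version = ">= 1.66.0, < 2.0.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.16.1, < 3.0.0"
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.3, < 3.0.0"
    }
    null = {
      source  = "hashicorp/null"
      version = ">= 3.2.1, < 4.0.0"
    }
    time = {
      source  = "hashicorp/time"
      version = ">= 0.9.1, < 1.0.0"
    }
  }
}
```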
### Inputs
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| [access\_tags](#input\_access\_tags) | Optional list of access management tags to add to the OCP Cluster created by this module. | `list(string)` | `[]` | no |
| [additional\_lb\_security\_group\_ids](#input\_additional\_lb\_security\_group\_ids) | Additional security group IDs to add to the load balancers associated with the cluster. These security groups are in addition to the IBM-maintained security group. | `list(string)` | `[]` | no |
| [additional\_vpe\_security\_group\_ids](#input\_additional\_vpe\_security\_group\_ids) | Additional security groups to add to all of the cluster's virtual private endpoints (VPEs). These are in addition to the IBM-maintained security group. | <pre>object({<br>  master   = optional(list(string), [])<br>  registry = optional(list(string), [])<br>  api      = optional(list(string), [])<br>})</pre> | `{}` | no |
| [addons](#input\_addons) | List of all addons supported by the ocp cluster. | <pre>object({<br>  debug-tool                = optional(string)<br>  image-key-synchronizer    = optional(string)<br>  openshift-data-foundation = optional(string)<br>  vpc-file-csi-driver       = optional(string)<br>  static-route              = optional(string)<br>  cluster-autoscaler        = optional(string)<br>  vpc-block-csi-driver      = optional(string)<br>})</pre> | `null` | no |
| [allow\_default\_worker\_pool\_replacement](#input\_allow\_default\_worker\_pool\_replacement) | (Advanced users) Set to true to allow the module to recreate the default worker pool. Use only if you get an error indicating that the default worker pool cannot be replaced on apply. Once the default worker pool is handled as a stand-alone ibm\_container\_vpc\_worker\_pool, set this variable to true for any change to the default worker pool that requires re-creating it. | `bool` | `false` | no |
| [attach\_ibm\_managed\_security\_group](#input\_attach\_ibm\_managed\_security\_group) | Whether to attach the IBM-defined default security group (named `kube-`) to all worker nodes. Applies only if `custom_security_group_ids` is set. | `bool` | `true` | no |
| [cloud\_monitoring\_access\_key](#input\_cloud\_monitoring\_access\_key) | Access key for the Cloud Monitoring agent to communicate with the instance. | `string` | `null` | no |
| [cloud\_monitoring\_add\_cluster\_name](#input\_cloud\_monitoring\_add\_cluster\_name) | If true, configure the cloud monitoring agent to attach a tag containing the cluster name to all metric data. | `bool` | `true` | no |
| [cloud\_monitoring\_agent\_name](#input\_cloud\_monitoring\_agent\_name) | Cloud Monitoring agent name. Used for naming all kubernetes and helm resources on the cluster. | `string` | `"sysdig-agent"` | no |
| [cloud\_monitoring\_agent\_namespace](#input\_cloud\_monitoring\_agent\_namespace) | Namespace where to deploy the Cloud Monitoring agent. Default value is 'ibm-observe' | `string` | `"ibm-observe"` | no |
| [cloud\_monitoring\_agent\_tags](#input\_cloud\_monitoring\_agent\_tags) | List of tags to associate with the cloud monitoring agents | `list(string)` | `[]` | no |
| [cloud\_monitoring\_agent\_tolerations](#input\_cloud\_monitoring\_agent\_tolerations) | List of tolerations to apply to Cloud Monitoring agent. | <pre>list(object({<br>  key               = optional(string)<br>  operator          = optional(string)<br>  value             = optional(string)<br>  effect            = optional(string)<br>  tolerationSeconds = optional(number)<br>}))</pre> | <pre>[<br>  {<br>    "operator": "Exists"<br>  },<br>  {<br>    "effect": "NoSchedule",<br>    "key": "node-role.kubernetes.io/master",<br>    "operator": "Exists"<br>  }<br>]</pre> | no |
| [cloud\_monitoring\_enabled](#input\_cloud\_monitoring\_enabled) | Deploy IBM Cloud Monitoring agent | `bool` | `true` | no |
| [cloud\_monitoring\_endpoint\_type](#input\_cloud\_monitoring\_endpoint\_type) | Specify the IBM Cloud Monitoring instance endpoint type (public or private) to use. Used to construct the ingestion endpoint. | `string` | `"private"` | no |
| [cloud\_monitoring\_instance\_region](#input\_cloud\_monitoring\_instance\_region) | The IBM Cloud Monitoring instance region. Used to construct the ingestion endpoint. | `string` | `null` | no |
| [cloud\_monitoring\_metrics\_filter](#input\_cloud\_monitoring\_metrics\_filter) | To filter custom metrics, specify the Cloud Monitoring metrics to include or to exclude. See https://cloud.ibm.com/docs/monitoring?topic=monitoring-change_kube_agent#change_kube_agent_inc_exc_metrics. | <pre>list(object({<br>  type = string<br>  name = string<br>}))</pre> | `[]` | no |
| [cloud\_monitoring\_secret\_name](#input\_cloud\_monitoring\_secret\_name) | The name of the secret which will store the access key. | `string` | `"sysdig-agent"` | no |
| [cluster\_config\_endpoint\_type](#input\_cluster\_config\_endpoint\_type) | Specify which type of endpoint to use for cluster config access: 'default', 'private', 'vpe', 'link'. The 'default' value uses the default endpoint of the cluster. | `string` | `"default"` | no |
| [cluster\_name](#input\_cluster\_name) | The name to give the OCP cluster provisioned by the module. | `string` | n/a | yes |
| [cluster\_ready\_when](#input\_cluster\_ready\_when) | The cluster is ready when one of the following: MasterNodeReady (not recommended), OneWorkerNodeReady, Normal, IngressReady | `string` | `"IngressReady"` | no |
| [cluster\_tags](#input\_cluster\_tags) | List of metadata labels to add to cluster. | `list(string)` | `[]` | no |
| [cos\_name](#input\_cos\_name) | Name of the COS instance to provision for OpenShift internal registry storage. A new instance is only provisioned if 'enable\_registry\_storage' is true and 'use\_existing\_cos' is false. Default: '<cluster\_name>\_cos' | `string` | `null` | no |
| [custom\_security\_group\_ids](#input\_custom\_security\_group\_ids) | Up to 4 additional security groups to add to all worker nodes. If `use_ibm_managed_security_group` is set to `true`, these security groups are in addition to the IBM-maintained security group. If additional groups are added, the default VPC security group is not assigned to the worker nodes. | `list(string)` | `null` | no |
| [disable\_outbound\_traffic\_protection](#input\_disable\_outbound\_traffic\_protection) | Whether to allow public outbound access from the cluster workers. This is only applicable for Red Hat OpenShift 4.15. | `bool` | `false` | no |
| [disable\_public\_endpoint](#input\_disable\_public\_endpoint) | Whether access to the public service endpoint is disabled when the cluster is created. Does not affect existing clusters. You can't disable a public endpoint on an existing cluster, so you can't convert a public cluster to a private cluster. To change a public endpoint to private, create another cluster with this input set to `true`. | `bool` | `false` | no |
| [enable\_registry\_storage](#input\_enable\_registry\_storage) | Set to `true` to enable IBM Cloud Object Storage for the Red Hat OpenShift internal image registry. Set to `false` only for new cluster deployments in an account that is allowlisted for this feature. | `bool` | `true` | no |
| [existing\_cos\_id](#input\_existing\_cos\_id) | The COS id of an already existing COS instance to use for OpenShift internal registry storage. Only required if 'enable\_registry\_storage' and 'use\_existing\_cos' are true | `string` | `null` | no |
| [existing\_kms\_instance\_guid](#input\_existing\_kms\_instance\_guid) | The GUID of an existing KMS instance which will be used for cluster encryption. If no value passed, cluster data is stored in the Kubernetes etcd, which ends up on the local disk of the Kubernetes master (not recommended). | `string` | `null` | no |
| [existing\_kms\_root\_key\_id](#input\_existing\_kms\_root\_key\_id) | The Key ID of a root key, existing in the KMS instance passed in var.existing\_kms\_instance\_guid, which will be used to encrypt the data encryption keys (DEKs) which are then used to encrypt the secrets in the cluster. Required if value passed for var.existing\_kms\_instance\_guid. | `string` | `null` | no |
| [force\_delete\_storage](#input\_force\_delete\_storage) | Delete attached storage when destroying the cluster - Default: false | `bool` | `false` | no |
| [ignore\_worker\_pool\_size\_changes](#input\_ignore\_worker\_pool\_size\_changes) | Enable if using worker autoscaling. Stops Terraform from managing the worker count. | `bool` | `false` | no |
| [import\_default\_worker\_pool\_on\_create](#input\_import\_default\_worker\_pool\_on\_create) | (Advanced users) Whether to handle the default worker pool as a stand-alone ibm\_container\_vpc\_worker\_pool resource on cluster creation. Only set to false if you understand the implications of managing the default worker pool as part of the cluster resource. Set to true to import the default worker pool as a separate resource. Set to false to manage the default worker pool as part of the cluster resource. | `bool` | `true` | no |
| [kms\_account\_id](#input\_kms\_account\_id) | ID of the account that owns the KMS instance used to encrypt the cluster. Required only if the KMS instance is in another account. | `string` | `null` | no |
| [kms\_use\_private\_endpoint](#input\_kms\_use\_private\_endpoint) | Set to true to use the private endpoint when communicating between the cluster and the KMS instance. | `bool` | `true` | no |
| [kms\_wait\_for\_apply](#input\_kms\_wait\_for\_apply) | Set to true to make Terraform wait until KMS is applied to the master and it is ready and deployed. | `bool` | `true` | no |
| [log\_analysis\_add\_cluster\_name](#input\_log\_analysis\_add\_cluster\_name) | If true, configure the log analysis agent to attach a tag containing the cluster name to all log messages. | `bool` | `true` | no |
| [log\_analysis\_agent\_custom\_line\_exclusion](#input\_log\_analysis\_agent\_custom\_line\_exclusion) | Log Analysis agent custom configuration for line exclusion setting LOGDNA\_K8S\_METADATA\_LINE\_EXCLUSION. See https://github.com/logdna/logdna-agent-v2/blob/master/docs/KUBERNETES.md#configuration-for-kubernetes-metadata-filtering for more info. | `string` | `null` | no |
| [log\_analysis\_agent\_custom\_line\_inclusion](#input\_log\_analysis\_agent\_custom\_line\_inclusion) | Log Analysis agent custom configuration for line inclusion setting LOGDNA\_K8S\_METADATA\_LINE\_INCLUSION. See https://github.com/logdna/logdna-agent-v2/blob/master/docs/KUBERNETES.md#configuration-for-kubernetes-metadata-filtering for more info. | `string` | `null` | no |
| [log\_analysis\_agent\_name](#input\_log\_analysis\_agent\_name) | Log Analysis agent name. Used for naming all kubernetes and helm resources on the cluster. | `string` | `"logdna-agent"` | no |
| [log\_analysis\_agent\_namespace](#input\_log\_analysis\_agent\_namespace) | Namespace where to deploy the Log Analysis agent. Default value is 'ibm-observe' | `string` | `"ibm-observe"` | no |
| [log\_analysis\_agent\_tags](#input\_log\_analysis\_agent\_tags) | List of tags to associate with the log analysis agents | `list(string)` | `[]` | no |
| [log\_analysis\_agent\_tolerations](#input\_log\_analysis\_agent\_tolerations) | List of tolerations to apply to Log Analysis agent. | <pre>list(object({<br>  key               = optional(string)<br>  operator          = optional(string)<br>  value             = optional(string)<br>  effect            = optional(string)<br>  tolerationSeconds = optional(number)<br>}))</pre> | <pre>[<br>  {<br>    "operator": "Exists"<br>  }<br>]</pre> | no |
| [log\_analysis\_enabled](#input\_log\_analysis\_enabled) | Deploy IBM Cloud Logging agent | `bool` | `true` | no |
| [log\_analysis\_endpoint\_type](#input\_log\_analysis\_endpoint\_type) | Specify the IBM Log Analysis instance endpoint type (public or private) to use. Used to construct the ingestion endpoint. | `string` | `"private"` | no |
| [log\_analysis\_ingestion\_key](#input\_log\_analysis\_ingestion\_key) | Ingestion key for the Log Analysis agent to communicate with the instance. | `string` | `null` | no |
| [log\_analysis\_instance\_region](#input\_log\_analysis\_instance\_region) | The IBM Log Analysis instance region. Used to construct the ingestion endpoint. | `string` | `null` | no |
| [log\_analysis\_secret\_name](#input\_log\_analysis\_secret\_name) | The name of the secret which will store the ingestion key. | `string` | `"logdna-agent"` | no |
| [manage\_all\_addons](#input\_manage\_all\_addons) | Whether Terraform manages all cluster add-ons, even add-ons installed outside of the module. If set to 'true', this module destroys the add-ons installed by other sources. | `bool` | `false` | no |
| [number\_of\_lbs](#input\_number\_of\_lbs) | The number of load balancers to associate with the `additional_lb_security_group_names` security group. Must match the number of load balancers that are associated with the cluster. | `number` | `1` | no |
| [ocp\_entitlement](#input\_ocp\_entitlement) | Value that is applied to the entitlements for OCP cluster provisioning | `string` | `"cloud_pak"` | no |
| [ocp\_version](#input\_ocp\_version) | The version of the OpenShift cluster that should be provisioned (format 4.x). This is only used during initial cluster provisioning, but ignored for future updates. Supports passing the string 'default' (current IKS default recommended version). If no value is passed, it will default to 'default'. | `string` | `null` | no |
| [operating\_system](#input\_operating\_system) | The operating system of the workers in the default worker pool. If no value is specified, the current default version OS will be used. See https://cloud.ibm.com/docs/openshift?topic=openshift-openshift_versions#openshift_versions_available . | `string` | `null` | no |
| [region](#input\_region) | The IBM Cloud region where all resources will be provisioned. | `string` | n/a | yes |
| [resource\_group\_id](#input\_resource\_group\_id) | The IBM Cloud resource group ID to provision all resources in. | `string` | n/a | yes |
| [use\_existing\_cos](#input\_use\_existing\_cos) | Flag indicating whether or not to use an existing COS instance for OpenShift internal registry storage. Only applicable if 'enable\_registry\_storage' is true | `bool` | `false` | no |
| [verify\_worker\_network\_readiness](#input\_verify\_worker\_network\_readiness) | By setting this to true, a script will run kubectl commands to verify that all worker nodes can communicate successfully with the master. If the runtime does not have access to the kube cluster to run kubectl commands, this should be set to false. | `bool` | `true` | no |
| [vpc\_id](#input\_vpc\_id) | The ID of the VPC to use. | `string` | n/a | yes |
| [vpc\_subnets](#input\_vpc\_subnets) | Subnet metadata by VPC tier. | <pre>map(list(object({<br>  id         = string<br>  zone       = string<br>  cidr_block = string<br>})))</pre> | n/a | yes |
| [worker\_pools](#input\_worker\_pools) | List of worker pools | <pre>list(object({<br>  subnet_prefix = optional(string)<br>  vpc_subnets = optional(list(object({<br>    id         = string<br>    zone       = string<br>    cidr_block = string<br>  })))<br>  pool_name         = string<br>  machine_type      = string<br>  workers_per_zone  = number<br>  resource_group_id = optional(string)<br>  operating_system  = optional(string)<br>  labels            = optional(map(string))<br>  minSize           = optional(number)<br>  maxSize           = optional(number)<br>  enableAutoscaling = optional(bool)<br>  boot_volume_encryption_kms_config = optional(object({<br>    crk             = string<br>    kms_instance_id = string<br>    kms_account_id  = optional(string)<br>  }))<br>}))</pre> | <pre>[<br>  {<br>    "enableAutoscaling": true,<br>    "labels": {},<br>    "machine_type": "bx2.4x16",<br>    "maxSize": 3,<br>    "minSize": 1,<br>    "pool_name": "default",<br>    "subnet_prefix": "zone-1",<br>    "workers_per_zone": 2<br>  },<br>  {<br>    "enableAutoscaling": true,<br>    "labels": {<br>      "dedicated": "zone-2"<br>    },<br>    "machine_type": "bx2.4x16",<br>    "maxSize": 3,<br>    "minSize": 1,<br>    "pool_name": "zone-2",<br>    "subnet_prefix": "zone-2",<br>    "workers_per_zone": 2<br>  },<br>  {<br>    "enableAutoscaling": true,<br>    "labels": {<br>      "dedicated": "zone-3"<br>    },<br>    "machine_type": "bx2.4x16",<br>    "maxSize": 3,<br>    "minSize": 1,<br>    "pool_name": "zone-3",<br>    "subnet_prefix": "zone-3",<br>    "workers_per_zone": 2<br>  }<br>]</pre> | no |

### Outputs
| Name | Description |
|------|-------------|
| [cluster\_crn](#output\_cluster\_crn) | CRN for the created cluster |
| [cluster\_id](#output\_cluster\_id) | ID of cluster created |
| [cluster\_name](#output\_cluster\_name) | Name of the created cluster |
| [cos\_crn](#output\_cos\_crn) | The IBM Cloud Object Storage instance CRN used to back up the internal registry in the OCP cluster. |
| [ingress\_hostname](#output\_ingress\_hostname) | The hostname that was assigned to the OCP cluster's Ingress subdomain. |
| [master\_url](#output\_master\_url) | The URL of the Kubernetes master. |
| [ocp\_version](#output\_ocp\_version) | OpenShift version of the cluster |
| [private\_service\_endpoint\_url](#output\_private\_service\_endpoint\_url) | Private service endpoint URL |
| [public\_service\_endpoint\_url](#output\_public\_service\_endpoint\_url) | Public service endpoint URL |
| [region](#output\_region) | Region cluster is deployed in |
| [resource\_group\_id](#output\_resource\_group\_id) | Resource group ID the cluster is deployed in |
| [vpc\_id](#output\_vpc\_id) | ID of the cluster's VPC |
| [vpe\_url](#output\_vpe\_url) | The virtual private endpoint URL of the Kubernetes cluster. |
| [workerpools](#output\_workerpools) | Worker pools created |

## Contributing
You can report issues and request features for this module in GitHub issues in the module repo. See [Report an issue or request a feature](https://github.com/terraform-ibm-modules/.github/blob/main/.github/SUPPORT.md).
To set up your local development environment, see [Local development setup](https://terraform-ibm-modules.github.io/documentation/#/local-dev-setup) in the project documentation.