https://github.com/redhat-developer-demos/devx-workshop-workload
The OpenShift setup required to run Workshops for Istio and Knative Tutorials
- Host: GitHub
- URL: https://github.com/redhat-developer-demos/devx-workshop-workload
- Owner: redhat-developer-demos
- Created: 2019-04-29T06:49:24.000Z (about 6 years ago)
- Default Branch: master
- Last Pushed: 2019-04-29T06:53:24.000Z (about 6 years ago)
- Last Synced: 2025-01-23T07:48:08.215Z (5 months ago)
- Size: 2.93 KB
- Stars: 1
- Watchers: 7
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: readme.adoc
README
= ocp4-workload-istio-controlplane - Deploy the Istio control plane
== Role overview
* This role deploys the Istio control plane. It consists of the following playbooks:
** Playbook: link:./tasks/pre_workload.yml[pre_workload.yml] - Sets up an
environment for the workload deployment.
*** Debug task will print out: `pre_workload Tasks completed successfully.`
** Playbook: link:./tasks/workload.yml[workload.yml] - Used to deploy Istio
*** Debug task will print out: `workload Tasks completed successfully.`
** Playbook: link:./tasks/post_workload.yml[post_workload.yml] - Used to
configure the workload after deployment
*** This role doesn't do anything here
*** Debug task will print out: `post_workload Tasks completed successfully.`
** Playbook: link:./tasks/remove_workload.yml[remove_workload.yml] - Used to
delete the workload
*** This role removes the logging deployment and project but not the operator configs
*** Debug task will print out: `remove_workload Tasks completed successfully.`

== Review the defaults variable file
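For orientation, a minimal sketch of what such a defaults file can contain. Only `ocp_username` and `silent` are documented for this role; the values shown here are illustrative, not the repository's actual defaults:

```yaml
# ./defaults/main.yml (illustrative sketch, not the actual file)
---
# Mandatory: the OpenShift user the workload is assigned to
ocp_username: opentlc-mgr

# Set to true to suppress the per-playbook debug messages
silent: false
```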
* This file link:./defaults/main.yml[./defaults/main.yml] contains all the variables you need to define to control the deployment of your workload.
* The variable *ocp_username* is mandatory to assign the workload to the correct OpenShift user.
* A variable *silent=True* can be passed to suppress debug messages.
* You can modify any of these default values by adding `-e "variable_name=variable_value"` to the command line.

=== Deploy a Workload with the `ocp-workload` playbook [Mostly for testing]
----
TARGET_HOST="bastion.na311.openshift.opentlc.com"
OCP_USERNAME="shacharb-redhat.com"
WORKLOAD="ocp-workload-enable-service-broker"
GUID=1001

# a TARGET_HOST is specified in the command line, without using an inventory file
ansible-playbook -i ${TARGET_HOST}, ./configs/ocp-workloads/ocp-workload.yml \
-e"ansible_ssh_private_key_file=~/.ssh/keytoyourhost.pem" \
-e"ansible_user=ec2-user" \
-e"ocp_username=${OCP_USERNAME}" \
-e"ocp_workload=${WORKLOAD}" \
-e"silent=False" \
-e"guid=${GUID}" \
-e"ACTION=create"
----

=== To Delete an environment
----
TARGET_HOST="bastion.na311.openshift.opentlc.com"
OCP_USERNAME="opentlc-mgr"
WORKLOAD="ocp4-workload-infra-nodes"
GUID=1002

# a TARGET_HOST is specified in the command line, without using an inventory file
ansible-playbook -i ${TARGET_HOST}, ./configs/ocp-workloads/ocp-workload.yml \
-e"ansible_ssh_private_key_file=~/.ssh/keytoyourhost.pem" \
-e"ansible_user=ec2-user" \
-e"ocp_username=${OCP_USERNAME}" \
-e"ocp_workload=${WORKLOAD}" \
-e"guid=${GUID}" \
-e"ACTION=remove"
----

== Other related information:
=== Deploy Workload on OpenShift Cluster from an existing playbook:
[source,yaml]
----
- name: Deploy a workload role on a master host
hosts: all
become: true
gather_facts: False
tags:
- step007
roles:
- { role: "{{ocp_workload}}", when: 'ocp_workload is defined' }
----
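Inside such a role, the task files listed earlier are typically wired to the `ACTION` variable passed on the command line. A sketch of what a `tasks/main.yml` dispatcher can look like (illustrative only; the repository's actual wiring may differ):

```yaml
# tasks/main.yml (illustrative sketch of ACTION-based dispatch)
---
- name: Set up the environment for the workload deployment
  import_tasks: pre_workload.yml
  when: ACTION == "create"

- name: Deploy the workload
  import_tasks: workload.yml
  when: ACTION == "create"

- name: Configure the workload after deployment
  import_tasks: post_workload.yml
  when: ACTION == "create"

- name: Delete the workload
  import_tasks: remove_workload.yml
  when: ACTION == "remove"
```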
NOTE: You might want to change `hosts: all` to fit your requirements.

=== Set up your Ansible inventory file
* You can create an Ansible inventory file to define your connection method to your host (Master/Bastion with `oc` command)
* You can also use the command line to define the hosts directly if your `ssh` configuration is set to connect to the host correctly
* You can also run against localhost if your cluster is already authenticated and configured in your `oc` configuration.

.Example inventory file
[source, ini]
----
[gptehosts:vars]
ansible_ssh_private_key_file=~/.ssh/keytoyourhost.pem
ansible_user=ec2-user

[gptehosts:children]
openshift

[openshift]
bastion.cluster1.openshift.opentlc.com
bastion.cluster2.openshift.opentlc.com
bastion.cluster3.openshift.opentlc.com
bastion.cluster4.openshift.opentlc.com

[dev]
bastion.cluster1.openshift.opentlc.com
bastion.cluster2.openshift.opentlc.com

[prod]
bastion.cluster3.openshift.opentlc.com
bastion.cluster4.openshift.opentlc.com
----
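With an inventory like this, `-i` can point at the file and the playbook can target a group instead of `all`. A sketch under that assumption (the playbook filename and group name are illustrative, matching the example inventory):

```yaml
# run-workload.yml (illustrative): target only the [dev] bastions
- name: Deploy a workload role on the dev bastions
  hosts: dev
  become: true
  gather_facts: false
  roles:
    - { role: "{{ ocp_workload }}", when: 'ocp_workload is defined' }
```

This would be invoked, for example, as `ansible-playbook -i inventory run-workload.yml -e ocp_workload=ocp4-workload-istio-controlplane -e ocp_username=... -e guid=... -e ACTION=create`.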