https://github.com/oracle-quickstart/oke-prerequisites
Instructions on how to set up an Oracle Kubernetes Engine (OKE) cluster
- Host: GitHub
- URL: https://github.com/oracle-quickstart/oke-prerequisites
- Owner: oracle-quickstart
- License: upl-1.0
- Created: 2018-10-12T18:52:07.000Z (about 7 years ago)
- Default Branch: master
- Last Pushed: 2019-10-10T00:35:57.000Z (about 6 years ago)
- Last Synced: 2025-03-27T08:58:17.653Z (9 months ago)
- Topics: cloud, kubectl, kubernetes, oke, oracle, oracle-led, terraform
- Language: HCL
- Size: 5.97 MB
- Stars: 2
- Watchers: 5
- Forks: 4
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# oke-prerequisites
These are instructions on how to set up an Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) cluster, along with a Terraform module that automates part of that process, for use with the Oracle Cloud Infrastructure Quick Start examples.
## Prerequisites
First off, you'll need to do some pre-deployment setup. That's all detailed [here](https://github.com/oracle/oci-quickstart-prerequisites).
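For context, that prerequisite setup leaves you with an OCI API signing key and a CLI/SDK config file, which typically looks something like this (illustrative placeholders only; your OCIDs, fingerprint, and region will differ):

```ini
[DEFAULT]
user=ocid1.user.oc1..<your-user-ocid>
fingerprint=<your-api-key-fingerprint>
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<your-tenancy-ocid>
region=us-phoenix-1
```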
## Clone the Module
Now, you'll want a local copy of this repo. You can make that with the commands:

```shell
git clone https://github.com/oracle-quickstart/oke-prerequisites.git
cd oke-prerequisites/terraform
ls
```

We now need to initialize the directory containing the module. This downloads the OCI provider plugin that the configuration needs. You can do this by running:

```shell
terraform init
```
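For orientation, the provider configuration that `terraform init` wires up looks roughly like this (a sketch in current HCL syntax; the exact variable names are assumptions, not necessarily the ones this module uses):

```hcl
# Hypothetical OCI provider block; the variable names are illustrative.
provider "oci" {
  tenancy_ocid     = var.tenancy_ocid
  user_ocid        = var.user_ocid
  fingerprint      = var.fingerprint
  private_key_path = var.private_key_path
  region           = var.region
}
```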

## Deploy
Now for the main attraction. First, let's make sure the plan looks good:

```shell
terraform plan
```

If the plan looks right, we can go ahead and apply it:

```shell
terraform apply
```

You'll need to enter `yes` when prompted. The apply should take about five minutes to run.

## Viewing the Cluster in the Console
We can check out our new cluster in the console by navigating [here](https://console.us-phoenix-1.oraclecloud.com/containers/clusters).

Similarly, the IaaS machines running the cluster are viewable [here](https://console.us-phoenix-1.oraclecloud.com/a/compute/instances).

## Set Up the Terminal
To interact with our cluster, we need `kubectl` on our local machine. Installation instructions are [here](https://kubernetes.io/docs/tasks/tools/install-kubectl/). I'm a big fan of easy, and on a Mac that means Homebrew:

```shell
brew install kubectl
```

We're also probably going to want `helm`. Once again, brew is our friend; if you're on another platform, take a look [here](https://github.com/helm/helm).

```shell
brew install kubernetes-helm
```

The `terraform apply` wrote out a Kubernetes config file called `config`. By default, `kubectl` expects its config file at `~/.kube/config`, so we can put it there by running:

```shell
mkdir -p ~/.kube
mv config ~/.kube/config
```
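If you already have a kubeconfig for another cluster, a slightly more defensive variant of those steps keeps a backup instead of clobbering it (a sketch; adjust paths as needed):

```shell
# Create the directory idempotently.
mkdir -p ~/.kube
# Keep a backup of any kubeconfig that is already there.
if [ -f ~/.kube/config ]; then
  cp ~/.kube/config ~/.kube/config.bak
fi
# Move the generated file into place, if it exists in the current directory.
if [ -f ./config ]; then
  mv ./config ~/.kube/config
fi
```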
We can make sure this all worked by listing the nodes in our cluster:

```shell
kubectl get nodes
```

If the worker nodes show up with a `Ready` status, the cluster is working.

## Make Yourself an Admin
You probably want your `kubectl` set up so that you're a cluster admin; otherwise your access to your new cluster will be limited. There are some instructions on that [here](https://docs.cloud.oracle.com/iaas/Content/ContEng/Concepts/contengaboutaccesscontrol.htm). You'll need to grab your user OCID (possibly from the console, [here](https://console.us-phoenix-1.oraclecloud.com/a/identity/users)) and then run a command like:

```shell
kubectl create clusterrolebinding myadmin --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaa...zutq
```
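Equivalently, the same binding can be written as a manifest and applied with `kubectl apply -f` (the user OCID below is the same placeholder as above):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: myadmin
subjects:
- kind: User
  name: ocid1.user.oc1..aaaaa...zutq
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```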

## Destroy the Deployment
When you no longer need the OKE cluster, you can delete the whole deployment with:

```shell
terraform destroy
```

You'll need to enter `yes` when prompted.
