https://github.com/oracle-quickstart/oci-wandisco
Terraform module to deploy WANdisco Fusion on Oracle Cloud Infrastructure (OCI)
- Host: GitHub
- URL: https://github.com/oracle-quickstart/oci-wandisco
- Owner: oracle-quickstart
- License: apache-2.0
- Archived: true
- Created: 2018-12-22T03:50:40.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2020-02-11T06:26:52.000Z (over 5 years ago)
- Last Synced: 2025-02-19T21:12:46.669Z (4 months ago)
- Topics: cloud, fusion, oci, oracle, partner-led, terraform, wandisco
- Language: HCL
- Homepage:
- Size: 11.5 MB
- Stars: 0
- Watchers: 8
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# oci-wandisco
These are Terraform modules that deploy [WANdisco](https://www.wandisco.com/) on [Oracle Cloud Infrastructure (OCI)](https://cloud.oracle.com/en_US/cloud-infrastructure). They are developed jointly by Oracle and WANdisco.

## Prerequisites
First off you'll need to do some pre-deploy setup. That's all detailed [here](https://github.com/oracle-quickstart/oci-prerequisites).

## Clone the Module
Now, you'll want a local copy of this repo. You can make that with these commands:

```
git clone https://github.com/oracle-quickstart/oci-wandisco.git
cd oci-wandisco
cd terraform1
ls
```

That should list the contents of the `terraform1` module directory.

## Setup
The goal is to keep OCI Object Storage data in sync across two regions. For replication with WANdisco Fusion, we need to provision at least two servers, one in each region.

### Buckets and Regions
In this example, we will set up object storage data replication between two OCI regions: us-ashburn-1 and us-phoenix-1. To configure the software, you will need two storage containers (buckets). These buckets will contain the Fusion metadata, so let's name them both *fusion_metadata*; create a bucket with this name in each region. Other storage buckets may contain the actual user data. If these do not exist yet, you can create them later, before you establish the replication rules in Fusion.
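If you would rather script the bucket creation than use the console, the OCI CLI can do it. A minimal sketch, assuming the OCI CLI is already configured and `$COMPARTMENT_OCID` holds your compartment OCID (depending on your CLI version you may also need to pass the Object Storage namespace with `-ns`):

```
# Create the fusion_metadata bucket in each of the two regions
for region in us-ashburn-1 us-phoenix-1; do
  oci os bucket create \
    --name fusion_metadata \
    --compartment-id "$COMPARTMENT_OCID" \
    --region "$region"
done
```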
If you have not set up your OCI user account for use with the S3-compatible API or CLI, create Customer Secret Keys to obtain an access key and secret key. This is described [here](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingcredentials.htm?Highlight=secretkey#). These extra keys are used as shown below, in addition to the user-specific keys you generated in the prerequisites exercise.
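For reference, the Customer Secret Key can also be created from the CLI, and the resulting key pair sanity-checked against the S3-compatible endpoint with the AWS CLI. A hedged sketch; the display name and the `oci-s3` profile are made up, and `<namespace>` is your Object Storage namespace:

```
# Create a Customer Secret Key (access key + secret key) for your user
oci iam customer-secret-key create \
  --user-id "$USER_OCID" \
  --display-name fusion-s3-key

# After adding the key pair to an AWS CLI profile (here called "oci-s3"),
# verify it works against the S3-compatible endpoint
aws s3 ls s3://fusion_metadata \
  --profile oci-s3 \
  --endpoint-url "https://<namespace>.compat.objectstorage.us-ashburn-1.oraclecloud.com"
```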
Before you perform the Terraform tasks, you will need to update the [variables.tf](./terraform1/variables.tf) file with values specific to your account and needs. You only need to enter the variable information in the file in `terraform1`, as the second copy is a linked file. The region is specified in the `region.tf` file for each instance. Information about the various OCI variables and where to obtain them is described in more detail in the prerequisites section above. Here is a list of the variables you will need to supply at this point:
```
### OCI Profile
variable "tenant" {default = "your_tenancy_name"}
variable "tenancy_ocid" {default = "ocid1.tenancy.oc1..key"}
variable "compartment_ocid" {default = "ocid1.tenancy.oc1..key"}
variable "user_ocid" {default = "ocid1.user.oc1..key"}
variable "fingerprint" {default = "key"}variable "ssh_public_key" { }
# Object Storage
variable "bucket" {default = "fusion_metadata"}
variable "accesskey" {default = "ocid1.credential.oc1..key"}
variable "secretkey" {default = "your_secret_key"}
variable "endpointurl" {
type = "map"
default = {
us-phoenix-1 = "https://.compat.objectstorage.us-phoenix-1.oraclecloud.com"
us-ashburn-1 = "https://.compat.objectstorage.us-ashburn-1.oraclecloud.com"
}
}
```

## Init
So now we need to initialize each of the directories containing the modules. This makes the modules aware of the OCI provider.
You can do this by running these commands:

```
cd ../terraform1
terraform init
cd ../terraform2
terraform init
```

Each run should report that Terraform has been successfully initialized.

## Plan
Let's make sure the *terraform plan* looks good:

```
cd ../terraform1
terraform plan
cd ../terraform2
terraform plan
```

The plan output lists the resources Terraform will create in each region.

## Deploy
If that all looks good as a pre-flight check, we can go ahead and do an *apply*, which deploys the plan:

```
cd ../terraform1
terraform apply -auto-approve | tee apply.txt
cd ../terraform2
terraform apply -auto-approve | tee apply.txt
```

The `terraform apply` for each server should take about 2 minutes to run. Once each process is complete, you'll see a summary of the created resources and the module outputs.

Make note of the Public IP and URL for each session. But don't worry: this info is also captured in the apply.txt file and can be retrieved by running `terraform refresh`.
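If you misplace the apply.txt files, and assuming the modules expose these values as Terraform outputs (which the apply summary suggests), you can re-print them at any time without changing anything:

```
# Re-print each module's outputs (public IP, Fusion UI URL)
cd ../terraform1 && terraform output
cd ../terraform2 && terraform output
```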
When the `terraform apply` task has finished, the infrastructure will be deployed and *cloud-init* scripts will run to deploy Fusion on each server. Those scripts finish asynchronously from the server provisioning process: cloud-init runs on the servers right after they boot for the first time. So it will be a few more minutes (about 3) before the Fusion application is accessible on each server.
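One way to tell when cloud-init has finished is to poll the UI port from your workstation. A minimal sketch; the `<public_ip>` placeholder and 15-second interval are arbitrary, and adjust the scheme if your Fusion UI is served over https:

```
# Wait until the Fusion UI starts answering on port 8083
until curl -sk -o /dev/null "http://<public_ip>:8083"; do
  sleep 15
done
echo "Fusion UI is up"
```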
## Connect to the UI
When `terraform apply` completes, it prints the output of the task, a few lines about each server:
(1) the URL to access the Fusion UI and
(2) the Public IP and hostname.

Now that you have waited a few minutes, let's try accessing the UI on port 8083 of the public IP of the first (Phoenix) server. You should see the Fusion login page.

Enter the username and password you specified in [variables.tf](./terraform1/variables.tf). You should then see the Fusion Dashboard.

At this point, the two servers are not yet configured as a replication pair. We must *induct* the other server to create a dual-zone membership. After that process has completed (it takes a minute or two), we can then create the actual replication policies: the rules that define which buckets to replicate. Fusion will create a proxy server with a virtual object storage bucket that mirrors data into the underlying buckets residing in each region.
Click on the "Nodes" tab along the top, and then click on the "Induct" button.
If you are on the *Phoenix* server, enter the *Public IP* of the *Ashburn* server.
## SSH to the Server
These machines are running Oracle Enterprise Linux (OEL). The default login user is *opc*. You can SSH into a machine with a command like this:

```
ssh -i ~/.ssh/oci opc@<public_ip>
```
Fusion is installed under `/opt/wandisco` and has configuration files under `/etc/wandisco`.
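Once logged in, you can take a quick look around the installation. A sketch; the exact service name varies by Fusion version, so `fusion-ui-server` here is an assumption:

```
# Inspect the Fusion installation (paths per this README)
ls /opt/wandisco
sudo ls /etc/wandisco

# Check the UI service, if your Fusion version registers one under this name
sudo systemctl status fusion-ui-server || sudo service fusion-ui-server status
```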
You can debug deployments by investigating the cloud-init output file in the home directory and/or by looking in `/var/log/messages`. Note: you'll need to be root, so use `sudo` to be able to read it.

For convenience, on each server you can add the other server's name to the hosts file. "Tail" the apply.txt files generated by the apply task, and cut/paste the IP and hostname.
Add the other host to the server in Phoenix:

```
ssh opc@129.146.162.177
echo "129.213.53.56 fusion-server.ashburn.fusion.oraclevcn.com" | sudo tee -a /etc/hosts
```

Add the other host to the server in Ashburn:

```
ssh opc@129.213.53.56
echo "129.146.162.177 fusion-server.phoenix.fusion.oraclevcn.com" | sudo tee -a /etc/hosts
```

Add both hosts to your own /etc/hosts file (if on, say, macOS):

```
echo "129.146.162.177 fusion-server.phoenix.fusion.oraclevcn.com" | sudo tee -a /etc/hosts
echo "129.213.53.56 fusion-server.ashburn.fusion.oraclevcn.com" | sudo tee -a /etc/hosts
```
## View the Server in the Console
You can also log in to the web console [here](https://console.us-ashburn-1.oraclecloud.com/a/compute/instances) to view info about the server running in OCI.
Then click on the instance name "fusion_server" to see its details.
## Destroy the Deployment
When you no longer need the deployment, you can run these commands to destroy the OCI infrastructure you just built:

```
cd ../terraform1
terraform destroy
cd ../terraform2
terraform destroy
```

You'll need to enter `yes` when prompted. Once complete, each run prints a summary of the destroyed resources.
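You can double-check that nothing was left behind by listing each module's state, which should come back empty after the destroy:

```
# Should print no resources once the destroy has completed
cd ../terraform1 && terraform state list
cd ../terraform2 && terraform state list
```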

Thanks for testing out Fusion with Multicloud on OCI!