https://github.com/oracle-quickstart/oci-oss-serverless-processing
Quick start showcasing serverless processing using Streams, Connector Hub and functions
- Host: GitHub
- URL: https://github.com/oracle-quickstart/oci-oss-serverless-processing
- Owner: oracle-quickstart
- License: upl-1.0
- Archived: true
- Created: 2022-04-07T04:40:32.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2022-04-07T06:28:13.000Z (about 3 years ago)
- Last Synced: 2025-02-19T21:12:47.802Z (4 months ago)
- Language: HCL
- Size: 85 KB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# oci-oss-serverless-processing
Quickstart demonstrating serverless processing using OCI Streaming, Service Connector Hub, Functions, and the AI Language service.
The solution publishes a stream of tweets from Twitter to an OCI stream using a Kafka connector. The tweets are processed by a function that calls the AI Language service to recognize phrases in the tweets, and the output is written to a second stream. Service Connector Hub (SCH) orchestrates the flow, with the input stream as the source, the function as a task, and the output stream as the target (see the Terraform sketch after the prerequisites list below).
## Prerequisites
- Permission to `manage` the following types of resources:
  - VCNs, internet gateways, route tables, subnets
  - Compute instances
  - Stream pools
  - Streams
  - Connect harnesses
  - Functions
  - AI Language service
- Quota to create the above resources.
If you don't have the required permissions and quota, contact your tenancy administrator. See [Policy Reference](https://docs.cloud.oracle.com/en-us/iaas/Content/Identity/Reference/policyreference.htm), [Service Limits](https://docs.cloud.oracle.com/en-us/iaas/Content/General/Concepts/servicelimits.htm), [Compartment Quotas](https://docs.cloud.oracle.com/iaas/Content/General/Concepts/resourcequotas.htm).
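The source-task-target wiring described above looks roughly like the following in Terraform. This is a minimal sketch for orientation only: the resource names, variable names, and the function reference are assumptions and do not necessarily match the module in this repository.
```hcl
# Minimal sketch: input stream -> function task -> output stream, orchestrated by
# Service Connector Hub. Names and the function OCID variable are illustrative.
variable "compartment_ocid" {}
variable "process_function_ocid" {} # hypothetical: OCID of the tweet-processing function

resource "oci_streaming_stream_pool" "tweets" {
  compartment_id = var.compartment_ocid
  name           = "tweets-pool"
}

resource "oci_streaming_stream" "tweets_input" {
  name           = "tweets-input"
  partitions     = 1
  stream_pool_id = oci_streaming_stream_pool.tweets.id
}

resource "oci_streaming_stream" "tweets_output" {
  name           = "tweets-output"
  partitions     = 1
  stream_pool_id = oci_streaming_stream_pool.tweets.id
}

resource "oci_sch_service_connector" "tweets_processing" {
  compartment_id = var.compartment_ocid
  display_name   = "tweets-processing"

  # Source: read raw tweets published to the input stream.
  source {
    kind      = "streaming"
    stream_id = oci_streaming_stream.tweets_input.id
    cursor {
      kind = "LATEST"
    }
  }

  # Task: invoke the processing function, which calls the AI Language service.
  tasks {
    kind        = "function"
    function_id = var.process_function_ocid
  }

  # Target: write the enriched records to the output stream.
  target {
    kind      = "streaming"
    stream_id = oci_streaming_stream.tweets_output.id
  }
}
```
The stack itself also provisions the networking, compute instance, and function listed in the prerequisites; those resources are omitted here for brevity.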
## Deploy Using Oracle Resource Manager
1. Click [Deploy to Oracle Cloud](https://cloud.oracle.com/resourcemanager/stacks/create?region=home&zipUrl=https://github.com/oracle-quickstart/oci-oss-serverless-processing/releases/latest/download/oci-oss-serverless-processing-latest.zip).
If you aren't already signed in, enter the tenancy and user credentials when prompted.
2. Review and accept the terms and conditions.
3. Select the region where you want to deploy the stack.
4. Follow the on-screen prompts and instructions to create the stack.
5. After creating the stack, click **Terraform Actions**, and select **Plan**.
6. Wait for the job to be completed, and review the plan.
To make any changes, return to the Stack Details page, click **Edit Stack**, and make the required changes. Then, run the **Plan** action again.
7. If no further changes are necessary, return to the Stack Details page, click **Terraform Actions**, and select **Apply**.
## Deploy Using the Terraform CLI
### Clone the Module
Now, you'll want a local copy of this repo. You can make that with the commands:
```
git clone https://github.com/oracle-quickstart/oci-oss-serverless-processing.git
cd oci-oss-serverless-processing
ls
```
### Set Up and Configure Terraform
1. Complete the prerequisites described [here](https://github.com/cloud-partners/oci-prerequisites).
2. Create a `terraform.tfvars` file, and specify the following variables:
```
# Authentication
tenancy_ocid      = ""
current_user_ocid = ""
fingerprint       = ""
private_key_path  = ""

# Region
region = ""

# Compartment
compartment_ocid = ""

# OCIR credentials
ocir_user_name     = ""
ocir_user_password = ""

# Twitter credentials
twitter_oauth_accessToken       = ""
twitter_oauth_accessTokenSecret = ""
twitter_oauth_consumerKey       = ""
twitter_oauth_consumerSecret    = ""
```
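For context, the authentication variables above are typically wired into the OCI Terraform provider roughly as shown below. This is a sketch under the assumption that the variable names match the `terraform.tfvars` entries; the module's actual provider configuration may differ.
```hcl
# Sketch: how the terraform.tfvars authentication entries usually feed the OCI provider.
variable "tenancy_ocid" {}
variable "current_user_ocid" {}
variable "fingerprint" {}
variable "private_key_path" {}
variable "region" {}

provider "oci" {
  tenancy_ocid     = var.tenancy_ocid
  user_ocid        = var.current_user_ocid
  fingerprint      = var.fingerprint
  private_key_path = var.private_key_path
  region           = var.region
}
```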
### Create the Resources
Run the following commands:
```
terraform init
terraform plan
terraform apply
```
### Destroy the Deployment
When you no longer need the deployment, you can run this command to destroy the resources:
```
terraform destroy
```
## Post Deployment
Once the resources are deployed, either through the Terraform CLI or through Resource Manager, run the following commands on the instance. (Use the generated private key to log in to the instance.)
- Start Kafka Connect. The configuration file has already been created as part of the deployment.
- Call the Kafka Connect REST endpoint to configure the Twitter connector. The connector configuration is copied to the instance as part of the stack.
```
nohup ./kafka/bin/connect-distributed.sh connect-distributed.properties >> connect.logs &
curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors/ -d @twitter_connector.json
```
## Quick Start Architecture
