https://github.com/julien-muke/ai-image-recognition-terraform
#  AI Image Recognition System with AWS Bedrock, Rekognition & Terraform
Serverless Image Analysis on AWS Using Bedrock, Rekognition & IaC with Terraform
Build this hands-on demo step by step with my detailed tutorial on the Julien Muke YouTube channel. Feel free to subscribe!
## 🚨 Tutorial
This repository contains the steps corresponding to an in-depth tutorial available on my YouTube channel, Julien Muke. If you prefer visual learning, this is the perfect resource for you. Follow my tutorial to learn how to build projects like these step-by-step in a beginner-friendly manner!

Welcome to this exciting hands-on project where we build a complete AI image analysis system on AWS, combining computer vision and generative AI in a fully serverless architecture, all deployed and managed using Terraform.
## Overview
In this project, we'll use Amazon Rekognition to detect objects, scenes, and concepts in an image, then pass those results to Amazon Bedrock (Titan model) to generate a human-readable summary. The frontend allows users to upload images and get insightful, AI-generated descriptions, all with zero servers to manage!

• Amazon Rekognition – Detects objects, scenes, and labels in images
• Amazon Bedrock (Titan) – Converts labels into descriptive text using generative AI
• AWS Lambda (Python) – Processes requests and orchestrates the AI services
• Amazon API Gateway – Exposes our backend via a RESTful API
• Amazon S3 – Hosts a static frontend (HTML/CSS/JS)
• Terraform – Provisions the full infrastructure as code (IaC)

Before you begin, ensure you have the following set up:
• **AWS Account**: An active AWS account with administrative privileges to create the necessary resources.
• **AWS CLI**: The AWS Command Line Interface installed and configured with your credentials.
• **Terraform**: Terraform installed on your local machine. You can verify the installation by running `terraform --version`.
• **Node.js, npm and Python**: Required for managing frontend dependencies if you choose to expand the project.
• **Model Access in Amazon Bedrock**: You must enable access to the foundation models you intend to use. For this project, navigate to the Amazon Bedrock console, go to Model access, and request access to Titan Text G1 - Express (the `amazon.titan-text-express-v1` model our Lambda function invokes). You can confirm the model is offered in your region with the sketch below.
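A minimal `boto3` sketch, assuming your AWS credentials are already configured for `us-east-1` (this project's default region), that confirms the model ID is offered there; note that access itself is still granted on the console's Model access page:

```py
import boto3

# Bedrock's control-plane client (region must match your deployment region).
bedrock = boto3.client("bedrock", region_name="us-east-1")

# List the Amazon-provided text models offered in this region.
response = bedrock.list_foundation_models(byProvider="Amazon", byOutputModality="TEXT")
for model in response["modelSummaries"]:
    print(model["modelId"])

# 'amazon.titan-text-express-v1' should appear in the list; request access to it
# on the Bedrock console's "Model access" page before deploying.
```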
## ➡️ Step 1 - Project Structure
First, let's organize our project files. Create a main directory for your project, and inside it, create the following structure:
```
ai-image-recognition-terraform/
├── terraform/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── lambda/
│   └── image_analyzer.py
└── frontend/
    ├── index.html
    ├── style.css
    └── script.js
```
## ➡️ Step 2 - Backend Development with Python and Lambda
We'll start by writing the Python code for our Lambda function. This function will be the brains of our operation.
lambda/image_analyzer.py
```py
import base64
import json

import boto3

# Initialize AWS clients
rekognition = boto3.client('rekognition')
bedrock_runtime = boto3.client('bedrock-runtime')

# CORS headers are returned on every response, including errors, so the
# browser can read the JSON body from our S3-hosted frontend.
CORS_HEADERS = {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Headers': 'Content-Type',
    'Access-Control-Allow-Methods': 'OPTIONS,POST'
}


def lambda_handler(event, context):
    """
    This Lambda function analyzes an image provided as a base64 encoded string.
    It uses Rekognition to detect labels and Bedrock (Titan) to generate a
    human-readable description.
    """
    try:
        # Get the base64 encoded image from the request body
        body = json.loads(event.get('body') or '{}')
        image_base64 = body.get('image')

        if not image_base64:
            return {
                'statusCode': 400,
                'headers': CORS_HEADERS,
                'body': json.dumps({'error': 'No image provided in the request body.'})
            }

        # Decode the base64 string
        image_bytes = base64.b64decode(image_base64)

        # 1. Analyze image with AWS Rekognition
        rekognition_response = rekognition.detect_labels(
            Image={'Bytes': image_bytes},
            MaxLabels=10,
            MinConfidence=80
        )
        labels = [label['Name'] for label in rekognition_response['Labels']]

        if not labels:
            return {
                'statusCode': 200,
                'headers': CORS_HEADERS,
                'body': json.dumps({
                    'labels': [],
                    'description': "Could not detect any labels with high confidence. Please try another image."
                })
            }

        # 2. Enhance results with Amazon Bedrock
        # Create a prompt for the Titan model
        prompt = (
            f"Based on the following labels detected in an image: {', '.join(labels)}. "
            "Please generate a single, descriptive sentence about the image."
        )

        # Configure the payload for the Bedrock model
        bedrock_payload = {
            "inputText": prompt,
            "textGenerationConfig": {
                "maxTokenCount": 100,
                "stopSequences": [],
                "temperature": 0.7,
                "topP": 0.9
            }
        }

        # Invoke the Bedrock model
        bedrock_response = bedrock_runtime.invoke_model(
            body=json.dumps(bedrock_payload),
            modelId='amazon.titan-text-express-v1',
            contentType='application/json',
            accept='application/json'
        )
        response_body = json.loads(bedrock_response['body'].read())
        description = response_body['results'][0]['outputText'].strip()

        # 3. Return the results
        return {
            'statusCode': 200,
            'headers': CORS_HEADERS,
            'body': json.dumps({
                'labels': labels,
                'description': description
            })
        }
    except Exception as e:
        return {
            'statusCode': 500,
            'headers': CORS_HEADERS,
            'body': json.dumps({'error': str(e)})
        }
```

⚠️ Note: This script uses the `boto3` AWS SDK for Python (preinstalled in the Lambda Python runtime, so the deployment zip only needs our handler file). It will perform the following actions:
1. Receive a base64-encoded image from the API Gateway.
2. Decode the image.
3. Send the image to Amazon Rekognition to detect labels.
4. Create a prompt with these labels and send it to Amazon Bedrock.
5. Return the labels and the AI-generated description.
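Before deploying, you can sanity-check the handler locally by feeding it a mock API Gateway proxy event. A minimal sketch, assuming you run it from the `lambda/` directory with credentials that have Rekognition and Bedrock access, and a hypothetical `test.jpg` alongside it:

```py
import base64
import json

# Import the handler from lambda/image_analyzer.py.
from image_analyzer import lambda_handler

# Mimic the API Gateway Lambda-proxy event: the request body is a JSON string.
with open("test.jpg", "rb") as f:
    event = {"body": json.dumps({"image": base64.b64encode(f.read()).decode("utf-8")})}

# The handler never touches the context object, so None suffices for a smoke test.
response = lambda_handler(event, None)
print(response["statusCode"])
print(json.loads(response["body"]))
```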
## ➡️ Step 3 - Infrastructure as Code with Terraform
Now, let's define all the AWS resources needed for our backend using Terraform.
Define some variables to make your configuration reusable.
terraform/variables.tf
```tf
variable "aws_region" {
  description = "The AWS region to deploy resources in."
  type        = string
  default     = "us-east-1"
}

variable "project_name" {
  description = "A unique name for the project to prefix resources."
  type        = string
  default     = "ai-image-analyzer"
}

variable "environment" {
  description = "Environment tag applied to resources (referenced in main.tf)."
  type        = string
  default     = "dev"
}
```

This is the main configuration file where we define all our resources.
terraform/main.tf
```tf
# ==============================================================================
# Provider Configuration
# ==============================================================================
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
    archive = {
      source  = "hashicorp/archive"
      version = "~> 2.0"
    }
  }
}

provider "aws" {
  region = var.aws_region
}

# ==============================================================================
# IAM Role and Policies for Lambda
# ==============================================================================
resource "aws_iam_role" "lambda_exec_role" {
  name = "${var.project_name}-lambda-exec-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Action = "sts:AssumeRole",
      Effect = "Allow",
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}

resource "aws_iam_policy" "lambda_logging_policy" {
  name        = "${var.project_name}-lambda-logging-policy"
  description = "IAM policy for Lambda to write logs to CloudWatch"

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [{
      Action = [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      Effect   = "Allow",
      Resource = "arn:aws:logs:*:*:*"
    }]
  })
}

resource "aws_iam_policy" "lambda_ai_services_policy" {
  name        = "${var.project_name}-ai-services-policy"
  description = "IAM policy for Lambda to access Rekognition and Bedrock"

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action   = "rekognition:DetectLabels",
        Effect   = "Allow",
        Resource = "*"
      },
      {
        Action   = "bedrock:InvokeModel",
        Effect   = "Allow",
        Resource = "arn:aws:bedrock:${var.aws_region}::foundation-model/amazon.titan-text-express-v1"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "lambda_logs_attach" {
  role       = aws_iam_role.lambda_exec_role.name
  policy_arn = aws_iam_policy.lambda_logging_policy.arn
}

resource "aws_iam_role_policy_attachment" "lambda_ai_services_attach" {
  role       = aws_iam_role.lambda_exec_role.name
  policy_arn = aws_iam_policy.lambda_ai_services_policy.arn
}

# ==============================================================================
# Lambda Function
# ==============================================================================
data "archive_file" "lambda_zip" {
  type        = "zip"
  source_dir  = "../lambda/"
  output_path = "${path.module}/image_analyzer.zip"
}

resource "aws_lambda_function" "image_analyzer_lambda" {
  filename      = data.archive_file.lambda_zip.output_path
  function_name = "${var.project_name}-function"
  role          = aws_iam_role.lambda_exec_role.arn
  handler       = "image_analyzer.lambda_handler"
  # A currently supported Lambda Python runtime (python3.9 is being retired).
  runtime          = "python3.12"
  timeout          = 30
  source_code_hash = data.archive_file.lambda_zip.output_base64sha256
}

# ==============================================================================
# API Gateway (Simplified for Lambda Proxy)
# ==============================================================================
resource "aws_api_gateway_rest_api" "api" {
  name        = "${var.project_name}-api"
  description = "API for the Image Analyzer"
}

resource "aws_api_gateway_resource" "resource" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  parent_id   = aws_api_gateway_rest_api.api.root_resource_id
  path_part   = "analyze"
}

resource "aws_api_gateway_method" "method" {
  rest_api_id   = aws_api_gateway_rest_api.api.id
  resource_id   = aws_api_gateway_resource.resource.id
  http_method   = "POST"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "integration" {
  rest_api_id             = aws_api_gateway_rest_api.api.id
  resource_id             = aws_api_gateway_resource.resource.id
  http_method             = aws_api_gateway_method.method.http_method
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.image_analyzer_lambda.invoke_arn
}

# This OPTIONS method is still needed for the browser's preflight request for CORS
resource "aws_api_gateway_method" "options_method" {
  rest_api_id   = aws_api_gateway_rest_api.api.id
  resource_id   = aws_api_gateway_resource.resource.id
  http_method   = "OPTIONS"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "options_integration" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  resource_id = aws_api_gateway_resource.resource.id
  http_method = aws_api_gateway_method.options_method.http_method
  type        = "MOCK"

  # The MOCK integration returns a success response with the necessary headers.
  request_templates = {
    "application/json" = "{\"statusCode\": 200}"
  }
}

resource "aws_api_gateway_method_response" "options_response" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  resource_id = aws_api_gateway_resource.resource.id
  http_method = aws_api_gateway_method.options_method.http_method
  status_code = "200"

  response_parameters = {
    "method.response.header.Access-Control-Allow-Headers" = true,
    "method.response.header.Access-Control-Allow-Methods" = true,
    "method.response.header.Access-Control-Allow-Origin"  = true
  }
}

resource "aws_api_gateway_integration_response" "options_integration_response" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  resource_id = aws_api_gateway_resource.resource.id
  http_method = aws_api_gateway_method.options_method.http_method
  status_code = aws_api_gateway_method_response.options_response.status_code

  response_parameters = {
    "method.response.header.Access-Control-Allow-Headers" = "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'",
    "method.response.header.Access-Control-Allow-Methods" = "'OPTIONS,POST'",
    "method.response.header.Access-Control-Allow-Origin"  = "'*'"
  }

  depends_on = [aws_api_gateway_integration.options_integration]
}

# --- Deployment Resources ---
resource "aws_lambda_permission" "api_gateway_permission" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.image_analyzer_lambda.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_api_gateway_rest_api.api.execution_arn}/*/*"
}

resource "aws_api_gateway_deployment" "deployment" {
  rest_api_id = aws_api_gateway_rest_api.api.id

  # This ensures a new deployment happens when any part of the API changes.
  triggers = {
    redeployment = sha1(jsonencode([
      aws_api_gateway_resource.resource.id,
      aws_api_gateway_method.method.id,
      aws_api_gateway_integration.integration.id,
      aws_api_gateway_method.options_method.id,
      aws_api_gateway_integration.options_integration.id
    ]))
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_api_gateway_stage" "stage" {
  deployment_id = aws_api_gateway_deployment.deployment.id
  rest_api_id   = aws_api_gateway_rest_api.api.id
  stage_name    = "v1"
}

# ==============================================================================
# S3 Bucket for Frontend Hosting (Modern Syntax)
# ==============================================================================
resource "random_id" "bucket_suffix" {
  byte_length = 8
}

resource "aws_s3_bucket" "frontend_bucket" {
  bucket        = "${var.project_name}-frontend-${random_id.bucket_suffix.hex}"
  force_destroy = true

  tags = {
    Name        = "${var.project_name}-frontend"
    Environment = var.environment
  }
}

resource "aws_s3_bucket_website_configuration" "frontend_website" {
  bucket = aws_s3_bucket.frontend_bucket.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "index.html"
  }

  depends_on = [aws_s3_bucket.frontend_bucket]
}

resource "aws_s3_bucket_public_access_block" "frontend_public_access" {
  bucket                  = aws_s3_bucket.frontend_bucket.id
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false

  depends_on = [aws_s3_bucket.frontend_bucket]
}

resource "aws_s3_bucket_policy" "frontend_policy" {
  bucket = aws_s3_bucket.frontend_bucket.id

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect    = "Allow",
        Principal = "*",
        Action    = "s3:GetObject",
        Resource  = "${aws_s3_bucket.frontend_bucket.arn}/*"
      }
    ]
  })

  depends_on = [aws_s3_bucket_public_access_block.frontend_public_access]
}

# ==============================================================================
# Optional Output
# ==============================================================================
output "frontend_website_url" {
  value = aws_s3_bucket_website_configuration.frontend_website.website_endpoint
}
```

This file will output the API Gateway URL and the S3 website endpoint after Terraform has finished deploying the resources.
terraform/outputs.tf
```tf
output "api_gateway_url" {
description = "The invoke URL of the deployed API"
value = aws_api_gateway_stage.stage.invoke_url
}output "lambda_function_name" {
description = "The name of the Lambda function"
value = aws_lambda_function.image_analyzer_lambda.function_name
}output "frontend_bucket_name" {
description = "The name of the S3 bucket hosting the frontend"
value = aws_s3_bucket.frontend_bucket.bucket
}output "frontend_website_endpoint" {
description = "The website endpoint of the frontend S3 bucket"
value = aws_s3_bucket_website_configuration.frontend_website.website_endpoint
}
```## β‘οΈ Step 4 - Frontend Development
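Once deployed, you can also read these outputs programmatically rather than copying them by hand. A minimal sketch, assuming `terraform apply` has completed and the script is run from the project root:

```py
import json
import subprocess

# Read the stack outputs as JSON from the terraform/ directory.
raw = subprocess.check_output(["terraform", "output", "-json"], cwd="terraform")
outputs = {name: item["value"] for name, item in json.loads(raw).items()}

# The analyze endpoint is the stage invoke URL plus the /analyze resource path.
print("API endpoint:", outputs["api_gateway_url"] + "/analyze")
print("Frontend URL:", "http://" + outputs["frontend_website_endpoint"])
```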
## ➡️ Step 4 - Frontend Development
Now we'll create the user interface that interacts with our backend.
This is the main HTML file for our application.
frontend/index.html
```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>AI Image Analyzer</title>
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <div class="container">
    <header>
      <h1>AI-Powered Image Analyzer</h1>
      <p>Upload an image to detect labels with AWS Rekognition and get a description from Amazon Bedrock.</p>
    </header>
    <div class="upload-area">
      <input type="file" id="imageUpload" accept="image/png, image/jpeg">
      <label for="imageUpload" id="uploadLabel"><span>Click to select an image</span></label>
      <button id="analyzeBtn" disabled>Analyze Image</button>
    </div>
    <div id="preview"><img id="imagePreview" alt="Selected image preview"></div>
    <div id="results" class="hidden">
      <h2>Analysis Results</h2>
      <div id="loader" class="loader"></div>
      <div id="resultContent">
        <h3>AI Generated Description:</h3>
        <p id="description"></p>
        <h3>Detected Labels:</h3>
        <div id="labels"></div>
      </div>
    </div>
    <footer>Built with AWS Rekognition, Bedrock &amp; Terraform</footer>
  </div>
  <script src="script.js"></script>
</body>
</html>
```
Here is some CSS to make the interface look professional and modern.
frontend/style.css
```css
@import url('https://fonts.googleapis.com/css2?family=Roboto:wght@300;400;700&display=swap');

body {
    font-family: 'Roboto', sans-serif;
    background-color: #f0f2f5;
    color: #333;
    margin: 0;
    padding: 20px;
    display: flex;
    justify-content: center;
    align-items: center;
    min-height: 100vh;
}

.container {
    width: 100%;
    max-width: 800px;
    background-color: #ffffff;
    border-radius: 12px;
    box-shadow: 0 4px 20px rgba(0, 0, 0, 0.1);
    padding: 30px;
    box-sizing: border-box;
}

header {
    text-align: center;
    border-bottom: 1px solid #e0e0e0;
    padding-bottom: 20px;
    margin-bottom: 30px;
}

header h1 {
    color: #1a73e8;
    margin: 0;
}

.upload-area {
    text-align: center;
    margin-bottom: 30px;
}

#imageUpload {
    display: none;
}

#uploadLabel {
    display: block;
    padding: 30px;
    border: 2px dashed #1a73e8;
    border-radius: 8px;
    cursor: pointer;
    background-color: #f8f9fa;
    margin-bottom: 20px;
    transition: background-color 0.3s;
}

#uploadLabel:hover {
    background-color: #e8f0fe;
}

#uploadLabel span {
    font-size: 1.2em;
    font-weight: 500;
}

#analyzeBtn {
    background-color: #1a73e8;
    color: white;
    padding: 12px 25px;
    border: none;
    border-radius: 8px;
    font-size: 1em;
    cursor: pointer;
    transition: background-color 0.3s, box-shadow 0.3s;
    box-shadow: 0 2px 5px rgba(0,0,0,0.1);
}

#analyzeBtn:disabled {
    background-color: #cccccc;
    cursor: not-allowed;
}

#analyzeBtn:not(:disabled):hover {
    background-color: #155ab6;
    box-shadow: 0 4px 10px rgba(0,0,0,0.2);
}

#preview {
    text-align: center;
    margin-bottom: 30px;
}

#imagePreview {
    max-width: 100%;
    max-height: 400px;
    border-radius: 8px;
    display: none;
    box-shadow: 0 4px 15px rgba(0, 0, 0, 0.1);
}

#results {
    background-color: #f8f9fa;
    border-radius: 8px;
    padding: 20px;
}

#results.hidden {
    display: none;
}

#resultContent {
    display: none;
}

#description {
    font-size: 1.1em;
    line-height: 1.6;
    margin-bottom: 20px;
    font-style: italic;
    color: #555;
}

#labels {
    display: flex;
    flex-wrap: wrap;
    gap: 10px;
}

.label-tag {
    background-color: #e8f0fe;
    color: #1a73e8;
    padding: 8px 15px;
    border-radius: 20px;
    font-size: 0.9em;
    font-weight: 500;
}

.loader {
    border: 4px solid #f3f3f3;
    border-top: 4px solid #1a73e8;
    border-radius: 50%;
    width: 40px;
    height: 40px;
    animation: spin 1s linear infinite;
    margin: 20px auto;
}

@keyframes spin {
    0% { transform: rotate(0deg); }
    100% { transform: rotate(360deg); }
}

footer {
    text-align: center;
    margin-top: 30px;
    padding-top: 20px;
    border-top: 1px solid #e0e0e0;
    font-size: 0.9em;
    color: #888;
}
```

This JavaScript file handles the logic for image preview, converting the image to base64, calling the API, and displaying the results.
frontend/script.js
```js
document.addEventListener('DOMContentLoaded', () => {
    const imageUpload = document.getElementById('imageUpload');
    const uploadLabel = document.getElementById('uploadLabel');
    const analyzeBtn = document.getElementById('analyzeBtn');
    const imagePreview = document.getElementById('imagePreview');
    const previewContainer = document.getElementById('preview');
    const resultsContainer = document.getElementById('results');
    const loader = document.getElementById('loader');
    const resultContent = document.getElementById('resultContent');
    const descriptionEl = document.getElementById('description');
    const labelsEl = document.getElementById('labels');

    const API_ENDPOINT = 'YOUR_API_GATEWAY_INVOKE_URL'; // <-- IMPORTANT: REPLACE THIS
    let base64Image = null;

    imageUpload.addEventListener('change', (event) => {
        const file = event.target.files[0];
        if (file) {
            // Display image preview
            const reader = new FileReader();
            reader.onload = (e) => {
                imagePreview.src = e.target.result;
                imagePreview.style.display = 'block';
                uploadLabel.querySelector('span').textContent = file.name;
                analyzeBtn.disabled = false;
            };
            reader.readAsDataURL(file);

            // Convert image to base64 for sending to API
            const readerForBase64 = new FileReader();
            readerForBase64.onload = (e) => {
                // Remove the data URL prefix (e.g., "data:image/jpeg;base64,")
                base64Image = e.target.result.split(',')[1];
            };
            readerForBase64.readAsDataURL(file);
        }
    });

    analyzeBtn.addEventListener('click', async () => {
        if (!base64Image || API_ENDPOINT === 'YOUR_API_GATEWAY_INVOKE_URL') {
            alert('Please select an image first or configure the API endpoint in script.js.');
            return;
        }

        // Show loader and results section
        resultsContainer.classList.remove('hidden');
        loader.style.display = 'block';
        resultContent.style.display = 'none';
        descriptionEl.textContent = '';
        labelsEl.innerHTML = '';

        try {
            const response = await fetch(API_ENDPOINT, {
                method: 'POST',
                headers: {
                    'Content-Type': 'application/json',
                },
                body: JSON.stringify({ image: base64Image }),
            });

            if (!response.ok) {
                const errorData = await response.json();
                throw new Error(errorData.error || `HTTP error! status: ${response.status}`);
            }

            const data = await response.json();

            // Display results
            descriptionEl.textContent = data.description;
            data.labels.forEach(label => {
                const labelTag = document.createElement('div');
                labelTag.className = 'label-tag';
                labelTag.textContent = label;
                labelsEl.appendChild(labelTag);
            });
        } catch (error) {
            console.error('Error:', error);
            descriptionEl.textContent = `An error occurred: ${error.message}`;
        } finally {
            // Hide loader and show content
            loader.style.display = 'none';
            resultContent.style.display = 'block';
        }
    });
});
```

⚠️ Important: You will need to replace `YOUR_API_GATEWAY_INVOKE_URL` with the actual URL you get from the Terraform output, including the `/analyze` resource path (for example, `https://<api-id>.execute-api.us-east-1.amazonaws.com/v1/analyze`).
## ➡️ Step 5 - Deployment and Testing
Now it's time to bring everything online.
### 1. Deploy the Backend with Terraform
• Navigate to the `terraform` directory in your terminal:
```bash
cd ai-image-recognition-terraform/terraform
```
• Initialize Terraform. This will download the necessary provider plugins.
```bash
terraform init
```
• (Optional) Plan the deployment. This shows you what resources Terraform will create.
```bash
terraform plan
```
• Apply the configuration to create the AWS resources. Type `yes` when prompted.
```bash
terraform apply
```
• After the deployment is complete, Terraform will display the outputs. Copy the `api_gateway_url` value; you can also re-read the outputs at any time with the `terraform output` sketch from Step 3.
### 2. Configure and Deploy the Frontend
• Open `frontend/script.js` in your text editor.
• Replace the placeholder `YOUR_API_GATEWAY_INVOKE_URL` with the URL you copied from the Terraform output, with `/analyze` appended.
• Now, upload the frontend files (`index.html`, `style.css`, and the updated `script.js`) to the S3 bucket created by Terraform. You can do this via the AWS Management Console, the AWS CLI, or the Python sketch below.
  → Find your bucket name in the S3 console or in the `frontend_bucket_name` output (it will be prefixed with `ai-image-analyzer-frontend-`).
  → Upload the three files from your `frontend` directory into the bucket.
  → Ensure the files have public read access. Terraform attempts to set this via the bucket policy, but you may need to confirm.
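A minimal `boto3` upload sketch, assuming it is run from the project root; the bucket name is a placeholder to replace with your `frontend_bucket_name` output. Setting `ContentType` matters, otherwise S3 serves the files as downloads instead of rendering a web page:

```py
import boto3

# Replace with the frontend_bucket_name value from `terraform output`.
BUCKET = 'ai-image-analyzer-frontend-REPLACE_ME'

s3 = boto3.client('s3')
files = {
    'index.html': 'text/html',
    'style.css': 'text/css',
    'script.js': 'application/javascript',
}

for name, content_type in files.items():
    # ContentType ensures S3 serves each file with the right MIME type.
    s3.upload_file(f'frontend/{name}', BUCKET, name, ExtraArgs={'ContentType': content_type})
    print(f'Uploaded {name}')
```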
### 3. Test the Application
1. Open the `frontend_website_endpoint` URL from the Terraform output in your web browser.
2. You should see the "AI Image Analyzer" interface.
3. Click the upload area and select a JPG or PNG image from your computer (keep it to a few megabytes at most: API Gateway caps request payloads at 10 MB and synchronous Lambda invocations at 6 MB, and base64 encoding inflates the image by about a third).
4. The image preview will appear, and the "Analyze Image" button will be enabled.
5. Click the button. The loader will appear while the backend processes the image.
6. After a few moments, the AI-generated description and the list of detected labels will be displayed. You can also exercise the API without the browser, as shown below.
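A minimal sketch using only the Python standard library; the endpoint URL and the `test.jpg` path are placeholders to adjust:

```py
import base64
import json
import urllib.request

# Replace with your api_gateway_url output plus the /analyze path.
API_URL = 'https://REPLACE_ME.execute-api.us-east-1.amazonaws.com/v1/analyze'

with open('test.jpg', 'rb') as f:
    payload = json.dumps({'image': base64.b64encode(f.read()).decode('utf-8')})

request = urllib.request.Request(
    API_URL,
    data=payload.encode('utf-8'),
    headers={'Content-Type': 'application/json'},
    method='POST',
)

with urllib.request.urlopen(request) as response:
    result = json.loads(response.read())

print('Labels:', ', '.join(result['labels']))
print('Description:', result['description'])
```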
## 🗑️ Cleaning Up
When you are finished with the project, you can destroy all the created AWS resources to avoid incurring further costs.
1. Navigate back to the `terraform` directory.
2. Run the destroy command:
```bash
terraform destroy
```
Because the frontend bucket is created with `force_destroy = true`, Terraform will empty it before deleting it.