# Sahyog: Centralized Disaster Response and Inventory Management System
---
## Problem Statement
India faces significant challenges in disaster management, including:
- **Delayed Response**: Lack of real-time data to predict and respond to disasters.
- **Inefficient Resource Allocation**: Mismanagement of relief materials and response teams.
- **Poor Communication**: Ineffective coordination between authorities, responders, and citizens.
- **Lack of Predictive Insights**: Absence of AI-powered models to predict disasters and mitigate damage.

In large-scale disasters like floods, earthquakes, and cyclones, these inefficiencies result in increased casualties, prolonged suffering, and immense economic loss.
---
## Solution Overview
**Sahyog** is an AI-powered disaster response platform designed to enhance preparedness, efficiency, and coordination in disaster management. It leverages **Google Cloud Technologies** and **AI models** to predict disasters, optimize resource management, and provide real-time coordination.
### Key Features:
- **Disaster Prediction and Early Warnings** using AI models deployed on Vertex AI.
- **Incident Reporting and Monitoring** through mobile and web apps using Gemini APIs for image, video, and text analysis.
- **Resource Inventory Management** with real-time tracking using RFID sensors and transparent logging through Hyperledger Fabric.
- **Task Allocation and Response Management** using AI algorithms to assign the nearest responders.
- **Multi-channel Communication** for real-time notifications using Twilio and Firebase Cloud Messaging.
- **Post-Disaster Analysis** to generate actionable insights for future disaster management.

---
## Tech Stack
### AI and Machine Learning
| **Technology** | **Purpose** |
|-----------------------|-----------------------------------------------------------------------------|
| **Vertex AI** | Training and deploying AI models (LSTMs, CNNs, GANs) for disaster prediction.|
| **Gemini APIs** | Multimodal analysis of images, videos, and text for damage assessment. |
| **OpenCV** | Image processing and damage detection from satellite and drone imagery. |
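To illustrate the OpenCV role above, here is a minimal sketch that flags candidate damage regions in an aerial image. The file names and thresholds are assumptions for illustration, not the project's actual pipeline:

```python
import cv2  # pip install opencv-python

# Hypothetical input; the real pipeline would stream imagery from GCS.
image = cv2.imread("aerial_tile.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Edge density is a crude proxy for structural damage and debris.
edges = cv2.Canny(gray, threshold1=100, threshold2=200)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Keep only large regions and draw bounding boxes for human review.
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h > 500:  # assumed minimum region size in pixels
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite("aerial_tile_flagged.png", image)
```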
### Backend

| **Technology** | **Purpose** |
|-----------------------|-----------------------------------------------------------------------------|
| **Node.js + Express** | Building RESTful APIs for the backend. |
| **FastAPI** | Serving AI models and handling AI-related requests. |
| **Apache Kafka** | Real-time data streaming from satellites, sensors, and drones. |
| **Apache Spark** | Large-scale data processing and post-disaster analysis. |
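As a sketch of the Kafka ingestion path above, a sensor reading might be published like this (the broker address and topic name are assumptions, using the `kafka-python` client):

```python
import json
import time

from kafka import KafkaProducer  # pip install kafka-python

# Broker address and topic name are illustrative, not the project's config.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

reading = {
    "sensor_id": "river-gauge-17",
    "water_level_m": 4.82,
    "timestamp": time.time(),
}
producer.send("sensor-readings", value=reading)
producer.flush()  # block until the message is delivered
```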
### Frontend

| **Technology** | **Purpose** |
|-----------------------|-----------------------------------------------------------------------------|
| **React.js** | Building the web dashboard for real-time monitoring and visualization. |
| **Flutter** | Developing the mobile app for citizen reporting and alerts. |
### Database

| **Technology** | **Purpose** |
|-----------------------|-----------------------------------------------------------------------------|
| **PostgreSQL** | Storing structured data like incident reports and resource details. |
| **MongoDB** | Storing unstructured data like images, videos, and sensor logs. |
### Resource Management

| **Technology** | **Purpose** |
|-----------------------|-----------------------------------------------------------------------------|
| **RFID Sensors** | Real-time tracking of relief resources like medical kits and food supplies. |
| **Hyperledger Fabric**| Blockchain-based transparent logging of resource distribution. |
### Communication

| **Technology** | **Purpose** |
|-----------------------|-----------------------------------------------------------------------------|
| **Twilio** | Sending SMS and voice alerts to citizens and authorities. |
| **Firebase Cloud Messaging** | Push notifications for real-time updates. |
### Visualization

| **Technology** | **Purpose** |
|-----------------------|-----------------------------------------------------------------------------|
| **Google Maps API** | GIS-based visualizations for disaster-affected areas and resource allocation.|
| **Google Data Studio**| Generating real-time reports and insights for stakeholders. |
### Infrastructure

| **Technology** | **Purpose** |
|-----------------------|-----------------------------------------------------------------------------|
| **Google Cloud Storage (GCS)** | Storing satellite images, sensor data, and AI models. |
| **Docker** | Containerization of backend and AI services. |
| **Kubernetes** | Orchestrating containerized applications for scalability. |
| **Terraform** | Infrastructure as Code (IaC) for managing cloud resources. |

---
## Workflow
1. **Data Collection**: Real-time data from satellites, sensors, and drones is streamed via **Apache Kafka** and stored in **Google Cloud Storage (GCS)**.
2. **Disaster Prediction**: **Vertex AI** trains and deploys AI models (LSTMs, CNNs, GANs) for disaster prediction using real-time data.
3. **Incident Reporting**: Citizens report incidents via a **Flutter app**, and **Gemini APIs** analyze images, text, and voice inputs for severity assessment.
4. **Decision Making**: AI models process predictions and reports to generate alerts using **Vertex AI** and **Gemini APIs**.
5. **Alert Generation**: Multilingual alerts are sent via **Twilio** (SMS/calls) and **Firebase Cloud Messaging** (push notifications).
6. **Resource Management**: Resources are tracked using **RFID sensors**, and **Hyperledger Fabric** ensures transparent distribution.
7. **Task Allocation**: AI optimizes task assignments using **Dijkstra's Algorithm** and real-time data from **Google Maps API** (see the sketch after this list).
8. **Post-Disaster Analysis**: Data is aggregated using **Apache Spark**, and **Vertex AI** generates insights for future preparedness.
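Step 7 mentions Dijkstra's Algorithm. Here is a minimal sketch of how the nearest responder base could be chosen; the graph and node names are made up for illustration, and the real system would derive edge weights from Google Maps travel times:

```python
import heapq

def dijkstra(graph: dict, source: str) -> dict:
    """Shortest travel time from `source` to every reachable node.

    `graph` maps node -> list of (neighbor, travel_minutes).
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Toy road graph: travel times in minutes between an incident and responder bases.
roads = {
    "incident": [("base_a", 12), ("base_b", 7)],
    "base_a": [("incident", 12), ("base_b", 4)],
    "base_b": [("incident", 7), ("base_a", 4)],
}
times = dijkstra(roads, "incident")
nearest = min(("base_a", "base_b"), key=times.get)
print(f"Dispatch from {nearest} ({times[nearest]} min away)")
```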
## Explaining the `ai-service` Folder in Detail

This AI service powers the core intelligence layer of the Sahyog disaster management system, providing:
- **Real-time disaster prediction** using satellite/drone/sensor data
- **Multimodal damage assessment** (images, text reports, sensor data)
- **Optimal resource allocation** during emergencies
- **Automated reporting & alerting**

### Technology Stack
| Component | Technology | Purpose |
|-----------------|-------------------------------------|----------------------------------|
| Core Framework | Python 3.9, FastAPI | API development |
| AI/ML | TensorFlow 2.12, Vertex AI | Model training/deployment |
| Multimodal AI | Gemini API | Text/image analysis |
| Cloud Services | GCP (Storage, BigQuery, PubSub) | Data pipeline |
| Optimization | OR-Tools | Resource allocation |
| Containerization| Docker, Gunicorn | Production deployment |

### Prerequisites
- Google Cloud account with:
- Vertex AI enabled
- Service account with AI Platform Admin role
- Docker installed

### Step-by-Step Setup
1. **Configure Environment Variables**
```bash
# Required Variables
export GCP_PROJECT_ID="your-project-id"
export GCP_REGION="us-central1"
export GEMINI_API_KEY="your-api-key"
export GCS_BUCKET="your-bucket-name"
# Optional Overrides
export DAMAGE_MODEL_ENDPOINT="projects/.../locations/.../endpoints/..."
export RESOURCE_MODEL_ENDPOINT="projects/.../locations/.../endpoints/..."
```
2. **Build Docker Image**
```bash
docker build -t sahyog-ai-service \
--build-arg GCP_PROJECT_ID=$GCP_PROJECT_ID \
--build-arg GEMINI_API_KEY=$GEMINI_API_KEY \
-f ai-service/Dockerfile .
```
3. **Run Container**
```bash
docker run -d \
-p 8000:8000 \
-e GCP_PROJECT_ID=$GCP_PROJECT_ID \
-e GEMINI_API_KEY=$GEMINI_API_KEY \
-e GCS_BUCKET=$GCS_BUCKET \
--name sahyog-ai \
sahyog-ai-service
```
4. **Verify Deployment**
```bash
curl http://localhost:8000/
# Expected: {"status":"healthy","services":{"vertex":true,"gemini":true,"gcp":true}}
```
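The health response above could come from a FastAPI endpoint along these lines. This is a sketch, not the project's actual `main.py`; the service-check helpers are placeholders:

```python
from fastapi import FastAPI

app = FastAPI()

def _check_vertex() -> bool:   # placeholder; a real check would ping Vertex AI
    return True

def _check_gemini() -> bool:   # placeholder; a real check would call Gemini
    return True

def _check_gcp() -> bool:      # placeholder; a real check would hit GCS/BigQuery
    return True

@app.get("/")
def health() -> dict:
    """Report overall status plus per-dependency availability."""
    return {
        "status": "healthy",
        "services": {
            "vertex": _check_vertex(),
            "gemini": _check_gemini(),
            "gcp": _check_gcp(),
        },
    }
```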
**Ensure you have these environment variables set:**

| Variable | Required | Description |
|---------------------------|----------|------------------------------------------------------------------|
| GCP_PROJECT_ID | Yes | Your Google Cloud Platform project ID |
| GEMINI_API_KEY | Yes | API key for Google Gemini services |
| GCS_BUCKET | Yes | Default Google Cloud Storage bucket for disaster data |
| GCP_REGION | No | GCP region (default: us-central1) |
| DAMAGE_MODEL_ENDPOINT | No | Vertex AI endpoint for pre-trained damage assessment model |
| RFID_API_ENDPOINT | No | URL for RFID inventory tracking system |
| MAX_CONCURRENT_TASKS | No | Limits parallel processing (default: CPU cores * 2) |
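As an illustration, the service's configuration might read these variables like so (a sketch; the project's actual config module may differ), including the documented `MAX_CONCURRENT_TASKS` default:

```python
import os

# Required: os.environ[...] raises KeyError early if a variable is unset.
GCP_PROJECT_ID = os.environ["GCP_PROJECT_ID"]
GEMINI_API_KEY = os.environ["GEMINI_API_KEY"]
GCS_BUCKET = os.environ["GCS_BUCKET"]

# Optional, with the defaults documented in the table above.
GCP_REGION = os.environ.get("GCP_REGION", "us-central1")
MAX_CONCURRENT_TASKS = int(
    os.environ.get("MAX_CONCURRENT_TASKS", (os.cpu_count() or 1) * 2)
)
```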
### **Complete Model Inventory**
#### **1. Disaster Prediction Models**
| Model File | Architecture | Input Shape | Output | Pretrained Weights |
|--------------------|--------------------|------------------|---------------------------------|--------------------|
| `lstm_model.py` | Stacked LSTM | (24, 5) | Binary probability | No |
| `cnn_model.py` | EfficientNetB4 | (256, 256, 3) | 5-class disaster | ImageNet |
| `gan_model.py` | DCGAN | (100,) latent dim| (256, 256, 3) synthetic images | No |
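For the `lstm_model.py` row, here is a minimal Keras sketch consistent with the listed shapes: 24 time steps of 5 features in, one binary probability out. Layer sizes are assumptions; the actual file may differ:

```python
import tensorflow as tf

# 24 time steps x 5 features (e.g., hourly sensor readings) -> disaster probability.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24, 5)),
    tf.keras.layers.LSTM(64, return_sequences=True),  # stacked LSTM, layer 1
    tf.keras.layers.LSTM(32),                         # stacked LSTM, layer 2
    tf.keras.layers.Dense(1, activation="sigmoid"),   # binary probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```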
#### **2. Damage Assessment Models**

| Model File | Architecture | Input Shape | Output | Key Features |
|-----------------------|--------------|------------------|----------------------------|--------------------|
| `image_classifier.py` | EfficientNetB4 | (512, 512, 3) | 4 damage levels | Transfer learning |
| `object_detector.py` | YOLOv8x | (640, 640, 3) | BBox + 6 classes | COCO pretrained |
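The `image_classifier.py` row describes transfer learning on EfficientNetB4; a minimal sketch of that setup (head size and dropout are assumptions) might look like:

```python
import tensorflow as tf

# Frozen ImageNet backbone + new 4-way damage-level head, per the table above.
base = tf.keras.applications.EfficientNetB4(
    include_top=False, weights="imagenet", input_shape=(512, 512, 3)
)
base.trainable = False  # transfer learning: train only the new head first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(4, activation="softmax"),  # 4 damage levels
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```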
#### **3. Resource Optimization Models**

| Model File | Architecture | Input Features | Output | Optimization Method |
|-----------------------|--------------------|-----------------------|-------------------------|---------------------|
| `predictive_model.py` | Hybrid LSTM-RF | 7 temporal features | 5 resource demands | Adam + Gini impurity|
| `allocation_model.py` | OR-Tools MIP | Demand constraints | Allocation plan | Linear Programming |
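To make the `allocation_model.py` row concrete, here is a toy OR-Tools MIP that ships relief kits from two depots to two zones at minimum cost. The instance data is invented for illustration; the real model would take live demand constraints:

```python
from ortools.linear_solver import pywraplp  # pip install ortools

# Toy instance: relief-kit stock per depot, demand per zone, unit shipping cost.
supply = {"depot_a": 80, "depot_b": 50}
demand = {"zone_1": 60, "zone_2": 40}
cost = {("depot_a", "zone_1"): 4, ("depot_a", "zone_2"): 6,
        ("depot_b", "zone_1"): 5, ("depot_b", "zone_2"): 3}

solver = pywraplp.Solver.CreateSolver("SCIP")  # MIP backend
x = {k: solver.IntVar(0, solver.infinity(), f"x_{k[0]}_{k[1]}") for k in cost}

for d, cap in supply.items():
    solver.Add(sum(x[d, z] for z in demand) <= cap)   # cannot ship more than stock
for z, need in demand.items():
    solver.Add(sum(x[d, z] for d in supply) >= need)  # every zone must be covered

solver.Minimize(sum(cost[k] * x[k] for k in cost))
if solver.Solve() == pywraplp.Solver.OPTIMAL:
    for k, var in x.items():
        print(k, int(var.solution_value()))
```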
## **Performance Comparison for the Deployed Models**

| Endpoint | Avg Latency | Throughput (req/s) |
|--------------------------|-------------|---------------------|
| Disaster Prediction | 320ms | 45 |
| Damage Assessment | 480ms | 32 |
| Resource Optimization | 210ms | 68 |

*Tested on an n1-standard-4 VM with 100 concurrent requests.*
Note: these figures vary between runs; the values above are averages across all requests.
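For reference, a minimal load-test sketch that mirrors the 100-concurrent-request setup. It assumes a local deployment at `http://localhost:8000/` and the `httpx` library; it is not the harness used for the numbers above:

```python
import asyncio
import time

import httpx  # pip install httpx

URL = "http://localhost:8000/"  # assumed local deployment of the service
CONCURRENCY = 100

async def timed_get(client: httpx.AsyncClient) -> float:
    """Return the wall-clock latency of one request."""
    start = time.perf_counter()
    await client.get(URL)
    return time.perf_counter() - start

async def main() -> None:
    async with httpx.AsyncClient(timeout=30.0) as client:
        wall_start = time.perf_counter()
        latencies = await asyncio.gather(
            *[timed_get(client) for _ in range(CONCURRENCY)]
        )
        wall = time.perf_counter() - wall_start
    print(f"avg latency: {sum(latencies) / len(latencies) * 1000:.0f} ms")
    print(f"throughput: {CONCURRENCY / wall:.1f} req/s")

asyncio.run(main())
```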
## Installation
To set up the project locally, follow these steps:
1. **Clone the repository:**
```bash
git clone https://github.com/ArshTiwari2004/Sahyog.git
cd Sahyog
```

### 2. Install dependencies
### For the backend
```bash
cd backend/api
npm install
```

### For the AI service
```bash
cd ../ai-service
pip install -r requirements.txt
```

### For the frontend
```bash
cd ../../frontend/web
npm install
```

### 3. Set up environment variables
```bash
cp .env.example .env
```
This copies `.env.example` to `.env`; open `.env` and fill in the required values.

### 4. Run the project
### Start the backend server
```bash
cd ../../backend/api
npm start
```

### Start the AI service
```bash
cd ../ai-service
uvicorn main:app --reload
```

### Start the frontend
```bash
cd ../../frontend/web
npm start
```

### Implementing Docker for Sahyog
#### To deploy:
Build the image:
```bash
docker build -t sahyog-ai-service -f ai-service/Dockerfile .
```

Then run the command:
```bash
docker run -p 8000:8000 -e GCP_PROJECT_ID=your-project sahyog-ai-service
```

## Project Structure
```bash
Sahyog/
├── backend/
│   ├── api/              # Main API server (Node.js + Express)
│   ├── ai-service/       # AI model service (Python + FastAPI)
│   ├── data-pipeline/    # Data processing pipeline
│   └── blockchain/       # Hyperledger Fabric setup
│
├── frontend/
│   ├── web/              # React.js web application
│   └── mobile/           # Flutter mobile application
│
├── infrastructure/
│   ├── terraform/        # Infrastructure as Code
│   ├── docker/           # Docker configurations
│   └── kubernetes/       # Kubernetes configs
│
├── docs/                 # Documentation
├── scripts/              # Utility scripts
├── .env.example          # Example environment variables
├── .gitignore            # Git ignore file
└── README.md             # Project overview
```

## Diagrams
### Process Flow Diagram

### Use Case Diagram

### Architecture Diagram
## Contributing
Contributions to project Sahyog are always welcome! Please follow these steps:

1. Fork the repository.
2. Create a new branch (`git checkout -b feature/YourFeatureName`).
3. Commit your changes (`git commit -m 'Add some feature'`).
4. Push to the branch (`git push origin feature/YourFeatureName`).
5. Open a pull request.
## License
This project is licensed under the MIT License. See the LICENSE file for details.