
# Batch Processing Data Architecture 🚀📊




---

## 📖 Introduction

**Batch Processing Data Architecture** builds a scalable, dependable, and maintainable data-processing backend on the Azure platform. Designed as the backbone for a machine learning application, it processes large volumes of data in batches, performs the necessary preprocessing, and aggregates the results for downstream ML tasks.

The system combines standard software components with data engineering best practices, integrating multiple Azure services into a single end-to-end solution.

![Project Architecture](https://github.com/DadaNanjesha/batch-processing/blob/main/Project%20structure.png)

---

## ✨ Key Features

- **Scalable Batch Processing:** Efficiently processes massive datasets in scheduled batches.
- **ETL Workflows:** Custom Python scripts for data extraction, transformation, and loading.
- **Azure Integration:** Leverages Blob Storage, PostgreSQL, Databricks, Key Vault, and more.
- **Modular Design:** Easy-to-maintain code structure with dedicated ETL and loading scripts.

---
## 🛠️ Technologies Used

- **Azure DevOps**

- **Azure Repos**

- **Azure Pipelines**

- **Azure Portal**

- **Azure Database for PostgreSQL**

- **Azure Databricks**

- **Azure Blob Storage**

- **Azure Key Vault**

- **Network Watcher & Network Security**

- **Resource Group**

- **Python**

---

## 🔄 Flow Diagram

```mermaid
flowchart TD
A[📄 CSV Data Source] --> B[🔄 ETL_batchdata.py]
B --> C["🧹 Data Transformation & Aggregation"]
C --> D[📤 loadtoblobtable.py]
D --> E["💾 Storage: Azure Blob / PostgreSQL"]
E --> F[📈 Machine Learning Application]
```
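
To make the flow concrete, here is a minimal sketch of the kind of extract/transform step `ETL_batchdata.py` performs. The file paths, column names, and aggregation below are illustrative assumptions, not the actual contents of the repository script.

```python
# A minimal ETL sketch in the spirit of ETL_batchdata.py.
# NOTE: paths, column names, and the aggregation are assumptions
# for illustration, not the actual contents of the repository script.
import pandas as pd

def extract(csv_path: str) -> pd.DataFrame:
    """Extract: read the raw batch CSV into a DataFrame."""
    return pd.read_csv(csv_path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Transform: clean the rows and aggregate them for downstream ML tasks."""
    df = df.dropna()                                    # drop incomplete records
    df.columns = [c.strip().lower() for c in df.columns]
    # Hypothetical aggregation: totals and row counts per date/category.
    return (
        df.groupby(["date", "category"], as_index=False)
          .agg(total=("value", "sum"), rows=("value", "size"))
    )

if __name__ == "__main__":
    processed = transform(extract("raw_batch.csv"))
    processed.to_csv("processed_batch.csv", index=False)  # picked up by the loader
```

Keeping extract and transform as separate functions mirrors the modular design noted above and makes each stage testable in isolation.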

---

## 🗂️ Project Structure

```plaintext
batch-processing/
├── .gitignore            # Git ignore file
├── ETL_batchdata.py      # Main ETL script for batch data processing
├── loadtoblobtable.py    # Script to load processed data into storage
├── GoudaShanbog_DadaNanjesha_10220129_Data Engineering_Phase1.pdf  # Phase 1 design document
├── GoudaShanbog_DadaNanjesha_10220129_Data Engineering_Phase2.pdf  # Phase 2 design document
├── GoudaShanbog_DadaNanjesha_10220129_Data Engineering_Phase3.pdf  # Phase 3 design document
├── Project structure.png # Visual diagram of project architecture
└── output file.pdf       # Sample output report from data aggregation
```
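
To illustrate how a `loadtoblobtable.py`-style loader might tie Key Vault and Blob Storage together, here is a rough sketch using the official Azure SDKs. The vault URL, secret name, container, and file names are placeholders, not values taken from the repository.

```python
# A rough sketch of a loadtoblobtable.py-style loader.
# NOTE: the vault URL, secret name, container, and file names are
# placeholders for illustration, not values from this repository.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.storage.blob import BlobServiceClient

VAULT_URL = "https://<your-vault>.vault.azure.net"  # placeholder

def get_secret(name: str) -> str:
    """Fetch a secret (e.g., a storage connection string) from Key Vault."""
    client = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())
    return client.get_secret(name).value

def load_to_blob(local_path: str, container: str, blob_name: str) -> None:
    """Upload the processed CSV to Azure Blob Storage."""
    conn_str = get_secret("storage-connection-string")  # assumed secret name
    service = BlobServiceClient.from_connection_string(conn_str)
    blob = service.get_blob_client(container=container, blob=blob_name)
    with open(local_path, "rb") as data:
        blob.upload_blob(data, overwrite=True)

if __name__ == "__main__":
    load_to_blob("processed_batch.csv", "processed-data", "processed_batch.csv")
```

Fetching the connection string from Key Vault at runtime keeps credentials out of source control, which is why Key Vault sits in the architecture alongside the storage services.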

---

## 💻 Setup Steps

Before getting started, ensure you have an active [Azure subscription](https://azure.microsoft.com/).

1. **Create Your Azure Environment:**
   - Set up your Azure subscription and create a Resource Group.
   - Provision the necessary services: Azure Blob Storage, PostgreSQL, Databricks, Key Vault, and so on.

2. **Prepare Your Data:**
   - Deploy your CSV data into the PostgreSQL database or Blob Storage as needed.

3. **Run the ETL Process:**
   - Execute the `ETL_batchdata.py` script to extract, transform, and prepare your data.
   - Run `loadtoblobtable.py` to load the processed data into your target storage.

4. **Integrate with ML Application:**
   - Ensure your machine learning application can access the processed data from the designated storage (a minimal read sketch follows these steps).
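
As a quick check for step 4, a downstream application can read the processed table back from Azure Database for PostgreSQL. This is a hedged sketch: the host, database, user, and table name are placeholders, and in practice the password would be fetched from Key Vault rather than hard-coded.

```python
# A minimal sketch for verifying downstream access (step 4).
# NOTE: host, database, user, and table names are placeholders.
import pandas as pd
import psycopg2

conn = psycopg2.connect(
    host="<server-name>.postgres.database.azure.com",  # placeholder
    dbname="batchdb",        # assumed database name
    user="<admin-user>",     # placeholder
    password="<password>",   # in practice, fetch this from Key Vault
    sslmode="require",       # Azure Database for PostgreSQL enforces TLS
)
df = pd.read_sql("SELECT * FROM processed_batch LIMIT 5", conn)
print(df.head())
conn.close()
```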

---
## ⭐️ Support & Call-to-Action

If you find this project useful, please consider:
- **Starring** the repository ⭐️
- **Forking** the project to contribute enhancements
- **Following** for updates on future improvements

Your engagement helps increase visibility and encourages further collaboration!

---
## 📜 License

This project is licensed under the [MIT License](LICENSE).

---

## 🙏 Acknowledgements

- **Azure Services:** For providing a robust, scalable infrastructure.
- **Data Engineering Principles:** Guiding our modular and reliable architecture.
- **Contributors:** Thank you to everyone who supported and contributed to this project.

---

*Happy Data Processing! 🚀📊*