Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/narius2030/molisa-data-warehouse-integration
Extracts data from multiple databases across the Labor, Invalids and Social Affairs sector, converts it to an appropriate structure and format, and loads it into a shared data warehouse and data marts. This lets staff of state agencies easily retrieve and analyze data from the consolidated warehouse.
apache-airflow apache-spark api-rest data-pipeline data-warehousing medallion-architecture postgresql
Last synced: 3 months ago
- Host: GitHub
- URL: https://github.com/narius2030/molisa-data-warehouse-integration
- Owner: Narius2030
- Created: 2024-08-04T04:27:58.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2024-09-05T15:33:11.000Z (4 months ago)
- Last Synced: 2024-09-28T07:02:15.188Z (3 months ago)
- Topics: apache-airflow, apache-spark, api-rest, data-pipeline, data-warehousing, medallion-architecture, postgresql
- Language: PLpgSQL
- Homepage:
- Size: 8.9 MB
- Stars: 3
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Metadata Files:
- Readme: README.md
README
# Integration Strategy
![image](https://github.com/user-attachments/assets/4d76bfb0-aff3-4520-9972-8ab6ce76008e)
**Description:** this data warehouse was designed following the `Inmon approach`: all data is integrated into a single central warehouse, from which several data marts are created for the related sectors of the government system
- **Data Source:** multiple databases from different systems in the governmental sector
- **Medallion Architecture:** refining data across layers (`bronze -> silver -> gold`) to progressively improve its structure and quality for better insights and analysis
- **Staging Area:** ensuring independence between the source databases and the data warehouse while transformations and aggregations are performed (a PySpark sketch follows this list)
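To make the layer boundaries concrete, here is a minimal PySpark sketch of a bronze-to-silver refinement step. It is only an assumed example: the table names (`bronze.residents`, `silver.residents`), the column names, and the JDBC connection settings are hypothetical and not taken from this repository.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical connection details -- not taken from the repository.
JDBC_URL = "jdbc:postgresql://warehouse-db:5432/molisa"
JDBC_PROPS = {"user": "etl_user", "password": "etl_password", "driver": "org.postgresql.Driver"}

spark = (
    SparkSession.builder
    .appName("bronze_to_silver_residents")
    .config("spark.jars.packages", "org.postgresql:postgresql:42.7.3")
    .getOrCreate()
)

# Bronze: raw rows copied as-is from a source system into the staging area.
bronze = spark.read.jdbc(JDBC_URL, "bronze.residents", properties=JDBC_PROPS)

# Silver: cleaned and conformed -- deduplicated, typed, and standardized.
silver = (
    bronze
    .filter(F.col("citizen_id").isNotNull())
    .dropDuplicates(["citizen_id"])
    .withColumn("birth_date", F.to_date("birth_date", "yyyy-MM-dd"))
    .withColumn("province", F.upper(F.trim(F.col("province"))))
)

# The gold layer would aggregate this further for the warehouse and data marts.
silver.write.jdbc(JDBC_URL, "silver.residents", mode="overwrite", properties=JDBC_PROPS)
```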
# Data Pipeline Automation
All of the steps in this project are designed as a data pipeline that can be automated: raw data is loaded from the sources, passed through the medallion procedure to ensure the quality of the information, and finally loaded into the warehouse and data marts.
- **Scheduler:** leveraging Apache Airflow to automate the end-to-end integration process
- **Transformation:** using the Apache Spark engine, via the PySpark package in Python, to process and aggregate information
- **Environment:** the process is deployed on Docker containers, including a *Database Server* and *Airflow* (see the DAG sketch after this list)
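The scheduling side can be sketched as a small Airflow DAG that chains the medallion stages. The DAG id, task ids, and placeholder callables below are illustrative assumptions, not the repository's actual DAG code.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_bronze():
    """Bronze: copy raw rows from the source systems into the staging area."""
    ...


def refine_silver():
    """Silver: deduplicate, type, and standardize the staged data."""
    ...


def build_gold_marts():
    """Gold: aggregate the silver layer into warehouse tables and data marts."""
    ...


with DAG(
    dag_id="molisa_warehouse_integration",
    start_date=datetime(2024, 8, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    bronze = PythonOperator(task_id="ingest_bronze", python_callable=ingest_bronze)
    silver = PythonOperator(task_id="refine_silver", python_callable=refine_silver)
    gold = PythonOperator(task_id="build_gold_marts", python_callable=build_gold_marts)

    # bronze -> silver -> gold mirrors the medallion layers described above.
    bronze >> silver >> gold
```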
### Docker setup
Dockerfile for Airflow and Spark
```dockerfile
FROM apache/airflow:2.9.1-python3.11

USER root

# Install OpenJDK-17
RUN apt update && \
    apt-get install -y openjdk-17-jdk && \
    apt-get install -y ant && \
    apt-get clean

# Set JAVA_HOME
ENV JAVA_HOME /usr/lib/jvm/java-17-openjdk-amd64/
RUN export JAVA_HOME

USER airflow

# Sync files from local to Docker image
COPY ./airflow/dags /opt/airflow/dags
COPY requirements.txt .

# PySpark package
RUN pip install --no-cache-dir -r requirements.txt
RUN rm requirements.txt
```

DAGs of data warehouse integration
![image](https://github.com/user-attachments/assets/91cd725b-35f7-49f8-a173-f086a9024a22)
DAGs of Resident data mart integration
![image](https://github.com/user-attachments/assets/517eaa0d-5013-4325-9b48-f9a574010f26)
DAGs of Time and Location integration
![image](https://github.com/user-attachments/assets/3a854528-70d3-4dbf-8961-c6eae03502b4)
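As an illustration of what a mart-building step such as the Resident data mart DAG above might do, the following PySpark sketch aggregates the silver layer into a mart table. The table names (`silver.residents`, `gold.resident_mart`), grouping columns, and connection settings are assumptions for illustration only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical connection details -- same assumptions as the earlier sketch.
JDBC_URL = "jdbc:postgresql://warehouse-db:5432/molisa"
JDBC_PROPS = {"user": "etl_user", "password": "etl_password", "driver": "org.postgresql.Driver"}

spark = SparkSession.builder.appName("gold_resident_mart").getOrCreate()

silver = spark.read.jdbc(JDBC_URL, "silver.residents", properties=JDBC_PROPS)

# Gold: aggregate the conformed silver data into a mart-friendly shape,
# e.g. resident counts per province and birth year.
resident_mart = (
    silver
    .withColumn("birth_year", F.year("birth_date"))
    .groupBy("province", "birth_year")
    .agg(F.count(F.lit(1)).alias("resident_count"))
)

resident_mart.write.jdbc(JDBC_URL, "gold.resident_mart", mode="overwrite", properties=JDBC_PROPS)
```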