Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/themihirmathur/uber-data-analytics
The goal of this project is to perform comprehensive data analytics on Uber trip data using a modern data engineering stack on Google Cloud Platform (GCP).
- Host: GitHub
- URL: https://github.com/themihirmathur/uber-data-analytics
- Owner: themihirmathur
- Created: 2024-07-05T14:56:42.000Z (4 months ago)
- Default Branch: main
- Last Pushed: 2024-07-31T05:57:55.000Z (4 months ago)
- Last Synced: 2024-09-29T06:07:19.631Z (about 2 months ago)
- Topics: bigquery, data-analysis, data-engineering, etl-pipeline, google-cloud-platform, looker, python
- Language: Jupyter Notebook
- Homepage: https://lookerstudio.google.com/reporting/ff3615b3-b633-4010-8420-6e0b88e8a060
- Size: 3.89 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# Uber Data Analytics 🚕 | Modern Data Engineering GCP Project 📈
![00](https://github.com/themihirmathur/Uber-Data-Analytics/assets/92594107/5014bc9d-4477-4a5e-ab94-4f320f86f44c)
## Introduction
The goal of this project is to perform comprehensive data analytics on Uber trip data using a modern data engineering stack on Google Cloud Platform (GCP). This project leverages various GCP services and tools to extract, transform, and load (ETL) the data, and ultimately visualize the insights gained from the analysis. The key components of this project include GCP Storage, Python, Compute Instance, Mage Data Pipeline Tool, BigQuery, and Looker Studio.
## Architecture
![Screenshot 2024-07-05 at 8 30 36 PM](https://github.com/themihirmathur/Uber-Data-Analytics/assets/92594107/e1888bea-6382-4678-b22c-864bee15e0e3)
The architecture for this project follows a robust and scalable design:
1. **Data Ingestion**:
- **GCP Storage**: Raw data is stored in Google Cloud Storage, providing a durable and highly available repository for the data (a minimal upload sketch follows this list).
2. **Data Processing**:
- **Compute Instance**: Python scripts running on GCP Compute Instances to clean and preprocess the data. Compute Instances offer flexible, scalable virtual machine resources for various workloads.
- **Mage Data Pipeline Tool**: Used to automate the ETL processes and manage data workflows. Mage simplifies building, running, and monitoring data pipelines.
3. **Data Warehousing**:
- **BigQuery**: Processed data is stored in BigQuery for efficient querying and analysis. BigQuery is a fully managed, serverless data warehouse that enables super-fast SQL queries using the processing power of Google's infrastructure.
![Screenshot 2024-07-28 at 2 09 09 AM](https://github.com/user-attachments/assets/c1a0259e-b4e6-493c-a626-9b24d2b642ee)
4. **Data Visualization**:
- **Looker Studio**: Dashboards and reports are created in Looker Studio to visualize the insights from the data analysis. Looker Studio makes it easy to build interactive, shareable reports and dashboards on top of data sources such as BigQuery.
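As a concrete illustration of the ingestion step, the snippet below uploads a raw trip CSV to a Cloud Storage bucket using the official `google-cloud-storage` client. This is a minimal sketch; the bucket and object names are placeholders, not values taken from this project.

```python
from google.cloud import storage  # pip install google-cloud-storage


def upload_raw_trips(local_path: str, bucket_name: str, blob_name: str) -> None:
    """Upload a local CSV of raw trip records to a GCS bucket."""
    client = storage.Client()          # uses Application Default Credentials
    bucket = client.bucket(bucket_name)
    blob = bucket.blob(blob_name)
    blob.upload_from_filename(local_path)
    print(f"Uploaded {local_path} to gs://{bucket_name}/{blob_name}")


# Example with hypothetical names:
# upload_raw_trips("uber_data.csv", "uber-analytics-raw", "raw/uber_data.csv")
```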
## Technology Used
- **Programming Language**: Python
- Python is used for data processing and analysis due to its simplicity and its rich ecosystem of libraries, such as Pandas and NumPy for data manipulation and Scikit-learn for modeling.
- **Google Cloud Platform**:
- **Google Storage**: A scalable, secure, and highly available object storage service.
- **Compute Instance**: Virtual machine instances that provide the compute capacity for running data processing scripts.
- **BigQuery**: A managed data warehouse for storing and querying large datasets efficiently.
- **Looker Studio**: A business intelligence tool for creating data visualizations and dashboards.
- **Modern Data Pipeline Tool**: [Mage](https://www.mage.ai/)
- Mage is a modern data pipeline tool that allows for easy ETL workflows and automation of data processing tasks.
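To give a feel for how Mage structures a pipeline, here is a minimal transformer block written in Mage's decorator style. The guarded import mirrors the scaffolding Mage generates for new blocks; the column names and cleaning logic are illustrative assumptions, not this project's actual blocks.

```python
import pandas as pd

# Mage provides these decorators when the block runs inside a pipeline.
if 'transformer' not in globals():
    from mage_ai.data_preparation.decorators import transformer


@transformer
def transform(df: pd.DataFrame, *args, **kwargs) -> pd.DataFrame:
    """Basic cleaning: drop duplicates and obviously invalid trips."""
    df = df.drop_duplicates().reset_index(drop=True)
    df = df[df['trip_distance'] > 0]       # assumed column name
    df = df[df['passenger_count'] > 0]     # assumed column name
    return df
```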
## Dataset Used
The dataset used for this project is the TLC Trip Record Data, which includes yellow and green taxi trip records. These records capture various fields such as:
- **Pick-up and drop-off dates/times**: The start and end times of each trip.
- **Pick-up and drop-off locations**: Latitude and longitude coordinates of trip origins and destinations.
- **Trip distances**: The distance traveled during each trip.
- **Itemized fares**: Breakdown of fare components including base fare, taxes, and surcharges.
- **Rate types**: Fare rate categories such as standard, flat rate, or negotiated.
- **Payment types**: Methods of payment used for the trip, such as credit card, cash, or no charge.
- **Driver-reported passenger counts**: Number of passengers reported by the driver for each trip.

**Dataset Source**: [Uber Data CSV](https://github.com/themihirmathur/Uber-Data-Analytics/tree/main/data)
**More Information**:
- **Website**: [TLC Trip Record Data](https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page)
- **Data Dictionary**: [TLC Data Dictionary](https://www.nyc.gov/assets/tlc/downloads/pdf/data_dictionary_trip_records_yellow.pdf)
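For a quick look at the file before building anything, the CSV can be loaded with pandas. The datetime column names below follow the standard TLC yellow-taxi schema (`tpep_pickup_datetime`, `tpep_dropoff_datetime`) and should be checked against the actual file; the path is a placeholder.

```python
import pandas as pd

# Load the trip CSV and parse the pickup/dropoff timestamps
# (column names assume the standard TLC yellow-taxi layout).
df = pd.read_csv(
    "data/uber_data.csv",
    parse_dates=["tpep_pickup_datetime", "tpep_dropoff_datetime"],
)

print(df.shape)    # number of rows and columns
print(df.dtypes)   # column types after parsing
print(df.head())   # first few trips
```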
## Data Model
![00](https://github.com/themihirmathur/Uber-Data-Analytics/assets/92594107/b805b069-8294-4329-bcf9-21b115d93bfa)
The data model for this project is designed to support efficient querying and analysis. It includes the following tables:
- **Trip Data**: Contains detailed records of each trip including timestamps, locations, distances, fares, and passenger counts.
- **Location Data**: Contains metadata about the pick-up and drop-off locations.
- **Fare Data**: Contains itemized fare details for each trip.
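A rough sketch of how such a model can be derived from the flat trip table with pandas is shown below: it builds a small datetime dimension and a fact table keyed to it. Table and column names are assumptions for illustration, not the project's exact schema.

```python
import pandas as pd


def build_datetime_dim(trips: pd.DataFrame) -> pd.DataFrame:
    """Derive a datetime dimension from the pickup timestamp (assumed column)."""
    dim = trips[["tpep_pickup_datetime"]].drop_duplicates().reset_index(drop=True)
    dim["pickup_hour"] = dim["tpep_pickup_datetime"].dt.hour
    dim["pickup_weekday"] = dim["tpep_pickup_datetime"].dt.weekday
    dim["datetime_id"] = dim.index          # surrogate key
    return dim


def build_fact_table(trips: pd.DataFrame, datetime_dim: pd.DataFrame) -> pd.DataFrame:
    """Join trips to the dimension and keep the measures plus the surrogate key."""
    fact = trips.merge(datetime_dim, on="tpep_pickup_datetime", how="left")
    return fact[["datetime_id", "trip_distance", "fare_amount", "passenger_count"]]
```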
## ETL Process
![Screenshot 2024-07-28 at 1 17 42 AM](https://github.com/user-attachments/assets/f48bb7a5-b21d-4309-b49c-437eb12cb02f)
1. **Extraction**: Raw trip data is extracted from the source and loaded into GCP Storage. This ensures the data is durably stored and easily accessible for further processing.
2. **Transformation**: Data cleaning and preprocessing are performed using Python scripts running on GCP Compute Instances. The Mage Data Pipeline Tool is used to orchestrate and manage these processes. Transformations include data normalization, filtering out erroneous entries, and enriching the data with additional context.
3. **Loading**: The cleaned and transformed data is loaded into BigQuery for storage and querying. BigQuery's powerful SQL engine allows for fast and efficient analysis of the data.
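The loading step can be performed directly from a DataFrame with the BigQuery Python client, as sketched below. The project, dataset, and table identifiers are placeholders.

```python
import pandas as pd
from google.cloud import bigquery  # pip install google-cloud-bigquery pyarrow


def load_to_bigquery(df: pd.DataFrame, table_id: str) -> None:
    """Load a transformed DataFrame into a BigQuery table, replacing its contents."""
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE")
    job = client.load_table_from_dataframe(df, table_id, job_config=job_config)
    job.result()  # block until the load job completes
    print(f"Loaded {len(df)} rows into {table_id}")


# Example with a hypothetical table id:
# load_to_bigquery(fact_df, "my-project.uber_analytics.fact_trips")
```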
## Data Analysis
The analysis focuses on uncovering insights such as:
- **Peak hours for trips**: Identifying the busiest times of the day for taxi rides (an example query follows this list).
- **Most popular pick-up and drop-off locations**: Determining the locations with the highest number of trips.
- **Average trip distances and fares**: Calculating the typical distance and cost of a trip.
- **Payment method distribution**: Analyzing the proportion of different payment methods used.
- **Passenger count trends**: Examining the distribution of passenger counts across trips.
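For instance, the peak-hours question can be answered with a short SQL query run through the BigQuery client. The table and column names below are assumptions used for illustration.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Trip counts and average fare per pickup hour (assumed table/column names).
query = """
SELECT
  EXTRACT(HOUR FROM tpep_pickup_datetime) AS pickup_hour,
  COUNT(*) AS trip_count,
  ROUND(AVG(fare_amount), 2) AS avg_fare
FROM `my-project.uber_analytics.trips`
GROUP BY pickup_hour
ORDER BY trip_count DESC
"""

peak_hours = client.query(query).to_dataframe()
print(peak_hours.head(5))  # busiest hours first
```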
## Visualization
The results of the data analysis are visualized using Looker Studio, providing interactive dashboards and reports that stakeholders can use to gain actionable insights. These visualizations help in understanding patterns and trends in the data, making it easier to make data-driven decisions.
[Uber_Data_Analytics_by_Mihir.pdf](https://github.com/user-attachments/files/16401664/Uber_Data_Analytics_by_Mihir.pdf)
![Screenshot 2024-07-28 at 4 26 02 AM](https://github.com/user-attachments/assets/68379c6b-70c3-4332-9821-7beaa38d05a9)
![Screenshot 2024-07-28 at 4 26 22 AM](https://github.com/user-attachments/assets/26ffd121-c4b7-439f-8ef9-2b155038d1b4)
![Screenshot 2024-07-28 at 4 26 37 AM](https://github.com/user-attachments/assets/2eb00bcb-1d1b-48a9-ace3-ca84fa1e2f2e)
## Conclusion
This project demonstrates the use of modern data engineering practices and tools to perform data analytics on Uber trip data. By leveraging GCP services and the Mage Data Pipeline Tool, the project achieves a scalable and efficient ETL process, enabling comprehensive analysis and visualization of the data.
---
Feel free to contact me for any questions or further information.
**Contacts**
Email: [[email protected]](mailto:[email protected])
LinkedIn: [linkedin.com/in/mihirmathur](https://www.linkedin.com/in/mihirmathur)
GitHub: [github.com/themihirmathur](https://github.com/themihirmathur)