Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
BA 2
- Host: GitHub
- URL: https://github.com/durlachert/delta-lake-optimization
- Owner: durlachert
- Created: 2024-08-06T11:44:54.000Z (4 months ago)
- Default Branch: main
- Last Pushed: 2024-08-10T11:52:37.000Z (3 months ago)
- Last Synced: 2024-09-29T07:02:09.642Z (about 2 months ago)
- Topics: apache-hive, apache-spark, big-data, delta-lake, hdfs, jupyter-notebook
- Language: Jupyter Notebook
- Homepage:
- Size: 8.33 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# Optimizing Delta Lake Lakehouse Tables for Improved Performance and Scalability
**Author:** Thomas Durlacher
**Date:** 10 August 2024

## 1. Introduction
This project investigates different Delta Lake optimization methods. The evaluation was carried out on a virtual Apache Spark cluster, providing insights into the efficiency, scalability, and compatibility of each method within a distributed big data processing environment. To keep the performance measurements comparable, a single synthetic dataset was generated and used to create all tables.
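The methods under evaluation are of the kind exposed through Delta Lake's table utility commands. A minimal illustrative sketch, assuming a Delta-enabled SparkSession `spark` and a placeholder table name `events`; the methods actually benchmarked are defined in the repository's notebooks:

```python
# Hedged sketch: common Delta Lake optimization commands (Delta Lake SQL).
# The table name `events` and column `event_date` are illustrative placeholders.

# Compact many small files into fewer larger ones to reduce read overhead.
spark.sql("OPTIMIZE events")

# Co-locate related rows on disk so queries can skip irrelevant files.
spark.sql("OPTIMIZE events ZORDER BY (event_date)")

# Remove data files no longer referenced by the table (default retention: 7 days).
spark.sql("VACUUM events")
```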
## 2. Repository
GitHub Repository: [https://github.com/durlachert/delta-lake-optimization/](https://github.com/durlachert/delta-lake-optimization)
## 3. Objectives
- Evaluate the performance impact of optimization techniques.
- Assess the scalability of the solutions as the size of the dataset increases.
- Analyze the query performance on complex analytical workloads.
- Identify any notable advantages or limitations of each solution in the context of the project requirements.

## 4. Tools and Technologies
- Delta Lake
- Apache Spark
- PySpark
- Scala
- Apache Kafka
- Apache Hive
- MySQL
- HDFS
- Ubuntu
- Jupyter Notebook
- Apache Toree

## 5. Methodology
### Dataset Preparation:
Generate synthetic datasets of varying sizes to simulate real-world big data scenarios.
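A minimal sketch of how such a synthetic dataset could be generated with PySpark; the schema, seed, and row count are illustrative assumptions, not the project's actual generator, and the SparkSession `spark` is the one configured in the Cluster Setup step below:

```python
from pyspark.sql import functions as F

def make_synthetic_dataset(spark, n_rows):
    """Generate n_rows synthetic records with a key, category, value, and date."""
    return (
        spark.range(n_rows)  # single column `id`: 0 .. n_rows - 1
        .withColumn("category", (F.col("id") % 100).cast("string"))
        .withColumn("value", F.rand(seed=42))
        .withColumn(
            "event_date",
            F.expr("date_add(date'2024-01-01', cast(id % 365 as int))"),
        )
    )

# Vary the size to simulate growing workloads, e.g. 1M, 10M, 100M rows.
df = make_synthetic_dataset(spark, 1_000_000)
```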
### Cluster Setup:
Deploy an Apache Spark cluster together with the dependencies required for Delta Lake.
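A minimal sketch of the Delta Lake dependency setup at the session level; the package version shown is an illustrative choice and must match the cluster's Spark and Scala versions, which the repository pins in its own setup:

```python
from pyspark.sql import SparkSession

# Build a SparkSession with the Delta Lake extension and catalog enabled.
spark = (
    SparkSession.builder
    .appName("delta-lake-optimization")
    # Illustrative coordinates; pick the delta-core build matching your Spark.
    .config("spark.jars.packages", "io.delta:delta-core_2.12:2.4.0")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog",
    )
    .getOrCreate()
)
```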
### Data Ingestion:
Load the generated datasets into tables using the Delta Lake format.
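A minimal ingestion sketch, assuming the synthetic DataFrame `df` from the Dataset Preparation step; the HDFS path and table name are placeholders:

```python
# Write the DataFrame as a Delta table; the HDFS path is a placeholder.
(
    df.write
    .format("delta")
    .mode("overwrite")
    .save("hdfs:///data/delta/events")
)

# Optionally register it in the Hive metastore so it is queryable by name.
spark.sql(
    "CREATE TABLE IF NOT EXISTS events USING DELTA "
    "LOCATION 'hdfs:///data/delta/events'"
)
```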
### Performance Metrics:
- Measure the time taken for read and write operations.
- Evaluate the scalability by gradually increasing the dataset size.
- Execute complex analytical queries and measure query performance.
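A simple wall-clock timing helper is one way to capture these measurements; the path and query below are illustrative, and an action such as `count()` is needed to force evaluation of Spark's lazy plans:

```python
import time

def timed(label, fn):
    """Run fn(), print the elapsed wall-clock time, and return its result."""
    start = time.perf_counter()
    result = fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")
    return result

# Write timing: persist the dataset as a Delta table (path is a placeholder).
timed("write", lambda: df.write.format("delta")
      .mode("overwrite").save("hdfs:///data/delta/events"))

# Read + query timing: a representative aggregation over the Delta table.
timed("aggregate", lambda: spark.read.format("delta")
      .load("hdfs:///data/delta/events")
      .groupBy("category").avg("value").count())
```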
### Observations and Analysis:
Document any challenges encountered during setup and configuration. Compare and contrast the performance metrics obtained.
## 6. Expected Outcomes
- A detailed report highlighting the strengths and weaknesses of Delta Lake optimization methods.
- Insights into the performance characteristics of the evaluated methods under varying workloads and dataset sizes.
- Recommendations for selecting the appropriate method for specific use cases.
- Description and visual representation of different performance measurements.