Project for the academic course "Decentralized Data Technologies"
- Host: GitHub
- URL: https://github.com/georgegkonis/spark-decentralized-query-processing
- Owner: georgegkonis
- License: mit
- Created: 2024-09-22T10:39:31.000Z (4 months ago)
- Default Branch: master
- Last Pushed: 2024-10-25T09:16:25.000Z (2 months ago)
- Last Synced: 2024-10-26T05:50:09.402Z (2 months ago)
- Topics: big-data, decentralized-data, jupyter, python, query-optimization, spark
- Language: Jupyter Notebook
- Homepage: https://www.ceid.upatras.gr/en/course/decentralized-data-technologies/
- Size: 245 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# Spark Decentralized Query Processing & Optimization
This project is part of the [Decentralized Data Technologies](https://www.ceid.upatras.gr/en/course/decentralized-data-technologies/) course offered by
the [Department of Computer Engineering & Informatics](https://www.ceid.upatras.gr/en/) at
the [University of Patras](https://www.upatras.gr/en/). The objective of the project is to design and implement a
decentralized query processing and optimization system built on top of Apache Spark. The system will efficiently
distribute query plans across multiple nodes, enabling parallel execution and improving performance in a decentralized
manner. Additionally, the system will include query optimization capabilities by pushing down filters and projections to
data sources, thereby minimizing data transfer and enhancing overall query efficiency.

## Installation
To set up the environment for this project, follow these steps:
1. Ensure you have `conda` installed. If not, you can download and install it from [here](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html).
2. Navigate to the project directory.
3. Create the environment using the `environment.yml` file:
```sh
conda env create -f environment.yml
```
4. Activate the environment:
```sh
conda activate spark-env
```
5. Start Jupyter Notebook:
```sh
jupyter notebook
```
You should now be able to run the notebooks and scripts in this project.
## Contributing
Due to the academic nature of this project, contributions are not accepted.
## License
This project is licensed under the MIT License - see the `LICENSE` file for details.
## Authors
- [George Gkonis](https://github.com/georgegkonis)