https://github.com/maprihoda/data-analysis-with-python-and-pyspark
- Host: GitHub
- URL: https://github.com/maprihoda/data-analysis-with-python-and-pyspark
- Owner: maprihoda
- Created: 2020-12-12T17:20:55.000Z (over 4 years ago)
- Default Branch: master
- Last Pushed: 2020-12-21T20:57:36.000Z (over 4 years ago)
- Last Synced: 2025-03-30T12:05:43.526Z (about 1 month ago)
- Topics: apache-spark, data-analysis, data-science, data-wrangling, dataframe-api, machine-learning, pyspark, python, sql
- Language: Python
- Homepage:
- Size: 6.87 MB
- Stars: 22
- Watchers: 1
- Forks: 12
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
Learning PySpark locally (i.e. without using any cloud service) by following the excellent [Data Analysis with Python and PySpark](https://www.manning.com/books/data-analysis-with-python-and-pyspark) by Jonathan Rioux.
## Environment setup
From the project root, run:
```bash
pipenv install
```

This will create a virtual environment with all the required dependencies installed.
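Once the environment exists, the usual `pipenv` commands apply. A quick sketch (the script path below is just a placeholder):

```bash
# Spawn a shell inside the virtual environment
pipenv shell

# ...or run a single command in the environment without entering a shell
pipenv run python path/to/some_chapter_file.py
```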
Although only `pipenv` is required for this setup to run, I strongly recommend having both `pyenv` and `pipenv` installed: `pyenv` manages Python versions, while `pipenv` takes care of virtual environments.

If you're on Windows, try [pyenv-win](https://github.com/pyenv-win/pyenv-win); `pipenv` itself should work just fine.
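For the curious, a typical `pyenv` workflow looks roughly like this (the Python version is illustrative, not a requirement of this repo):

```bash
# Install a Python version and pin it for this project
pyenv install 3.8.6
pyenv local 3.8.6   # writes a .python-version file in the project root

# pipenv then creates its virtual environment with the pinned interpreter
pipenv install
```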
The notebooks were created with Visual Studio Code's [Jupyter code cells](https://code.visualstudio.com/docs/python/jupyter-support-py#_jupyter-code-cells), which I prefer over standard Jupyter notebooks/labs because of much better git integration.
You can easily convert the code-cell files into Jupyter notebooks with Visual Studio Code: open a file, right-click, and select `Export current Python file as Jupyter notebook`.
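A code-cell file is just a regular `.py` file whose cells are delimited by `# %%` comment markers (with `# %% [markdown]` for Markdown cells). A minimal sketch:

```python
# %% [markdown]
# # Example notebook
# This Markdown cell renders in VS Code's interactive window.

# %%
# A code cell: run it with "Run Cell", just like a notebook cell
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example").getOrCreate()

# %%
spark.range(5).show()
```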
The `data` directory contains only the smaller data files. You will have to download the larger ones as per the instructions in the individual notebooks, e.g.:
```python
import os

home_dir = os.environ["HOME"]
DATA_DIRECTORY = os.path.join(home_dir, "Documents", "spark", "data", "backblaze")
```

This works on my Linux machine. You may need to modify the path if you're on Windows.
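With the data in place, a notebook can then read it from that directory. A minimal sketch, assuming the downloaded files are CSVs (the glob pattern is an assumption, not something the repo prescribes):

```python
import os

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("backblaze").getOrCreate()

home_dir = os.environ["HOME"]
DATA_DIRECTORY = os.path.join(home_dir, "Documents", "spark", "data", "backblaze")

# Read every CSV in the directory into one DataFrame (pattern is illustrative)
df = spark.read.csv(
    os.path.join(DATA_DIRECTORY, "*.csv"), header=True, inferSchema=True
)
df.printSchema()
```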