Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Spark in Action, 2nd edition - chapter 2
- Host: GitHub
- URL: https://github.com/jgperrin/net.jgp.books.spark.ch02
- Owner: jgperrin
- License: apache-2.0
- Created: 2018-01-18T00:02:04.000Z (about 7 years ago)
- Default Branch: master
- Last Pushed: 2022-09-08T00:07:02.000Z (over 2 years ago)
- Last Synced: 2023-02-26T13:42:11.417Z (almost 2 years ago)
- Topics: apache-spark, java, java8, manning, spark, sparkwithjava
- Language: Java
- Homepage: http://jgp.net/sia
- Size: 73.2 KB
- Stars: 21
- Watchers: 4
- Forks: 30
- Open Issues: 1
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
README
This repository contains the Java labs, as well as their Scala and Python ports, for the code used in Manning Publications' **[Spark in Action, 2nd edition](https://www.manning.com/books/spark-in-action-second-edition?a_aid=jgp)**, by Jean-Georges Perrin.
---
# Spark in Action, 2nd edition - chapter 2
Welcome to Spark in Action, 2nd edition, chapter 2. As we build our mental model of Spark processing, we need a small but efficient example that runs through a whole flow (or data pipeline).
This code is designed to work with Apache Spark v3.1.2.
## Lab
Each chapter has one or more labs. Labs are examples used for teaching in the [book](https://www.manning.com/books/spark-in-action-second-edition?a_aid=jgp). You are encouraged to take ownership of the code, modify it, and experiment with it; hence the use of the term **lab**.
### Lab \#100
The `CsvToDatabaseApp` application does the following:
1. The app acquires a session (a `SparkSession`).
1. It then asks Spark to load (ingest) a dataset in CSV format.
1. Spark performs a small transformation.
1. Spark stores the result in a database; in this example, PostgreSQL. A minimal Java sketch of this flow follows the SQL setup below.

To create the PostgreSQL database, you can use the following SQL commands:

```sql
CREATE ROLE jgp WITH
  LOGIN
  NOSUPERUSER
  NOCREATEDB
  NOCREATEROLE
  INHERIT
  NOREPLICATION
  CONNECTION LIMIT -1
  PASSWORD 'Spark<3Java';

CREATE DATABASE spark_labs
  WITH
  OWNER = jgp
  ENCODING = 'UTF8'
  CONNECTION LIMIT = -1;
```
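For reference, here is a minimal Java sketch of the four steps above, assuming a local master and a PostgreSQL instance set up with the SQL commands just shown. The CSV path, column names, and target table name are illustrative assumptions, so the actual lab code in this repository may differ:

```java
import static org.apache.spark.sql.functions.concat;
import static org.apache.spark.sql.functions.lit;

import java.util.Properties;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class CsvToDatabaseSketch {

  public static void main(String[] args) {
    // 1. Acquire a session
    SparkSession spark = SparkSession.builder()
        .appName("CSV to DB")
        .master("local[*]")
        .getOrCreate();

    // 2. Ingest a CSV file (path and header option are assumptions)
    Dataset<Row> df = spark.read()
        .format("csv")
        .option("header", true)
        .load("data/authors.csv");

    // 3. A small transformation: build a "name" column from two existing columns
    df = df.withColumn("name",
        concat(df.col("lname"), lit(", "), df.col("fname")));

    // 4. Store the result in PostgreSQL over JDBC
    Properties props = new Properties();
    props.put("user", "jgp");
    props.put("password", "Spark<3Java");
    df.write()
        .mode(SaveMode.Overwrite)
        .jdbc("jdbc:postgresql://localhost/spark_labs", "ch02", props);

    spark.stop();
  }
}
```

Note that the PostgreSQL JDBC driver must be on the classpath for the final write step.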
### Lab \#110

The `CsvToApacheDerbyApp` application is similar to lab \#100; however, instead of using PostgreSQL, it uses [Apache Derby](https://db.apache.org/derby/).
Special thanks to Kelvin Rawls for porting the code to Derby.
> Apache Derby has a bit of a special place in my heart. Originally named Cloudscape, the eponymous company was acquired by [Informix](https://en.wikipedia.org/wiki/IBM_Informix) in 1999, which was itself acquired by [IBM](https://en.wikipedia.org/wiki/IBM) in 2001. Informix's project Arrowhead had the ambition to align the SQL (and other features) across all their databases. Under the new ownership, Cloudscape was influenced by Db2's SQL before going to the open source community as Apache Derby.
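In practice, switching from lab \#100 to lab \#110 mostly comes down to the JDBC driver and connection URL. A minimal sketch of the two connection strings (the host, database name, and in-memory option are assumptions for illustration, not taken from the lab code):

```java
// Lab #100: PostgreSQL running as a separate server
String postgresUrl = "jdbc:postgresql://localhost/spark_labs";

// Lab #110: embedded Apache Derby; create=true creates the database on first use
String derbyUrl = "jdbc:derby:memory:spark_labs;create=true";
```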
## Running the lab in Java
For information on running the Java lab, see chapter 2 in [Spark in Action, 2nd edition](http://jgp.net/sia).
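If you want a quick local run outside of the book's walkthrough, one possible approach (an assumption, not something this repository documents) is to launch the lab's main class through the Maven exec plugin; the class name below is inferred from the Scala example further down and may differ from the actual Java class:

```
mvn compile exec:java -Dexec.mainClass=net.jgp.books.spark.ch02.lab100_csv_to_db.CsvToRelationalDatabaseApp
```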
## Running the lab using PySpark
Prerequisites:
You will need:
* `git`.
* Apache Spark (please refer to Appendix P, "Spark in production: installation and a few tips").

1. Clone this project:

   ```
   git clone https://github.com/jgperrin/net.jgp.books.spark.ch02
   ```

2. Go to the lab directory in the Python sources:

   ```
   cd net.jgp.books.spark.ch02/src/main/python/lab100_csv_to_db/
   ```

3. Run the application with the following `spark-submit` command:

   ```
   spark-submit csvToRelationalDatabaseApp.py
   ```
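Writing to PostgreSQL over JDBC requires the PostgreSQL driver on Spark's classpath. If your Spark installation does not already include it, one way to pull it in is with `--packages` (the driver version here is only an example):

```
spark-submit --packages org.postgresql:postgresql:42.2.24 csvToRelationalDatabaseApp.py
```

The same option works when submitting the Scala jar below.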
## Running the lab in Scala

Prerequisites:
You will need:
* `git`.
* Apache Spark (please refer to Appendix P, "Spark in production: installation and a few tips").

1. Clone this project:

   ```
   git clone https://github.com/jgperrin/net.jgp.books.spark.ch02
   ```

2. Go to the project directory:

   ```
   cd net.jgp.books.spark.ch02
   ```

3. Package the application using the `sbt` command:

   ```
   sbt clean assembly
   ```

4. Run the Spark/Scala application using the `spark-submit` command, as shown below:

   ```
   spark-submit --class net.jgp.books.spark.ch02.lab100_csv_to_db.CsvToRelationalDatabaseScalaApp target/scala-2.12/SparkInAction2-Chapter02-assembly-1.0.0.jar
   ```
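After any of the runs, you can verify the result directly in PostgreSQL. The table name below is an assumption for illustration; query whatever table the lab writes to:

```sql
-- connect first, e.g. with: psql -U jgp -d spark_labs
SELECT * FROM ch02;
```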
## News
1. [2020-06-07] Updated the pom.xml to support Apache Spark v3.1.2.
1. [2020-06-07] As we celebrate the first anniversary of Spark in Action, 2nd edition, it is the best-rated Apache Spark book on [Amazon](https://amzn.to/2TPnmOv).

## Notes:
1. [Java] Due to renaming the packages to match Java standards more closely, this project is not in sync with the book's MEAP prior to v10 (published in April 2019).
1. [Scala, Python] As of MEAP v14, we have introduced Scala and Python examples (published in October 2019).
1. The master branch contains the latest version of the code, running against the latest supported version of Apache Spark. Look in specific branches for specific versions.
---

Follow me on Twitter to get updates about the book and Apache Spark: [@jgperrin](https://twitter.com/jgperrin). Join the book's community on [Facebook](https://facebook.com/sparkinaction/) or on [Manning's live site](https://forums.manning.com/forums/spark-in-action-second-edition?a_aid=jgp).