Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/alinski29/spark-zoo
Collection of Spark utilities to solve recurrent problems
- Host: GitHub
- URL: https://github.com/alinski29/spark-zoo
- Owner: alinski29
- Created: 2022-03-16T08:17:47.000Z (almost 3 years ago)
- Default Branch: main
- Last Pushed: 2022-03-21T09:33:04.000Z (almost 3 years ago)
- Last Synced: 2023-05-19T13:43:19.329Z (over 1 year ago)
- Language: Scala
- Size: 19.5 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# spark-zoo
Collection of Spark utilities to solve recurrent problems.

Table of contents
---
- [Installation](#installation)
- [Usage](#usage)
- [Functionality](#functionality)
  - [DataFrame](#dataframe-methods)
  - [DeltaTable](#deltatable-methods)
  - [Helpers](#general-helpers)
- [Supported Spark versions](#supported-spark-versions)
---

## **Installation**
Currently, you have to build it from source using sbt; it should soon be available in a public repository.
```bash
sbt clean assembly
```

## **Usage**
- For `DataFrame` functions, import the following, which adds the extension methods to the `DataFrame` API:
```scala
import com.github.alinski.spark.zoo.DataFrameHelpers._
```
- For `DeltaTable`, accordingly:
```scala
import com.github.alinski.spark.zoo.DeltaTableHelpers._
```
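
Imports like these typically work through an implicit class that wraps the target type. A minimal sketch of that pattern; the class name and method body here are illustrative assumptions, not the library's actual code:

```scala
import org.apache.spark.sql.DataFrame

object DataFrameHelpersSketch {
  // Hypothetical: an implicit class makes its methods callable on any
  // DataFrame once this object's members are imported into scope.
  implicit class RichDataFrame(df: DataFrame) {
    // Assumed semantics: true if every named column exists on the frame.
    def hasColumns(cols: Seq[String]): Boolean =
      cols.forall(df.columns.contains)
  }
}
```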
## **Functionality**

- ### `DataFrame` methods
- .getMinMax(cols: Seq[String]): DataFrame
- .getMinMax(groups: Seq[String], cols: Seq[String]): DataFrame
- .appendMinMax(cols: Seq[String]): DataFrame
- .appendMinMax(groups: Seq[String], cols: Seq[String]): DataFrame
- .hasColumns(cols: Seq[String]): Boolean
- .enforceSchema(schema: StructType): DataFrame
- .getPath: Option[String]
- .hasPartitions: Boolean
- .getPartitionFileCounts: Map[String, Int]
- .applyKafkaSchema(keySchema: Option[StructType] = None, valueSchema: Option[StructType] = None): DataFrame
- .repartition(columns: Seq[String], maxPartitionRecords: Int): DataFrame
- .addMetaFields: DataFrame
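
A minimal sketch of how a few of these might be used after the import above; the behaviors described in the comments are read off the signatures, not verified against the implementation:

```scala
import org.apache.spark.sql.SparkSession
import com.github.alinski.spark.zoo.DataFrameHelpers._

val spark = SparkSession.builder().appName("zoo-demo").master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq(
  ("a", 1, 10.0),
  ("a", 3, 30.0),
  ("b", 2, 20.0)
).toDF("group", "id", "value")

// Assumed: a small frame holding the min and max of each listed column.
val bounds = df.getMinMax(Seq("id", "value"))

// Assumed: the same aggregation, computed per group.
val boundsByGroup = df.getMinMax(Seq("group"), Seq("id", "value"))

// Assumed: a guard before touching columns that may be absent.
if (df.hasColumns(Seq("group", "id"))) df.select("group", "id").show()
```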
- ### `DeltaTable` methods
- .upsert(updates: DataFrame, predicate: String, ...): Unit
- .compact(maxFiles: Long, optimalFiles: Option[Long]): Unit
- .generateManifest(): Unit
- .getOrCreate(path: String, updates, partitions, mode): DeltaTable
- .generateInsertExpr
- .generateUpdateExpr(updates, ignoreNulls = false, addMetaFields = false, ignoreIfSet = false)
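
Correspondingly for `DeltaTable`; in this sketch the predicate format and the defaults of the elided `.upsert` parameters are assumptions, and `updates` is an ordinary `DataFrame`:

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.SparkSession
import com.github.alinski.spark.zoo.DeltaTableHelpers._

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val updates = Seq((1, "new-payload")).toDF("id", "payload")
val table   = DeltaTable.forPath(spark, "/data/events")

// Assumed: merge `updates` into the table on the predicate, with the
// elided parameters of .upsert left at their defaults.
table.upsert(updates, "target.id = source.id")

// Assumed: rewrite small files once the table holds more than `maxFiles`
// of them, aiming for roughly `optimalFiles` afterwards.
table.compact(maxFiles = 100L, optimalFiles = Some(10L))

// Write a symlink manifest, e.g. for external readers such as Presto/Athena.
table.generateManifest()
```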
- ### General helpers
- .readSchemaFromFile(path: String): StructType
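
This helper pairs naturally with `enforceSchema` from the `DataFrame` methods above. A hedged sketch, assuming the helper is exposed by the same package import and that the file holds a serialized `StructType` (e.g. its JSON form):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.StructType
// Assumed import path; the README lists only the method name.
import com.github.alinski.spark.zoo.DataFrameHelpers._

val spark = SparkSession.builder().master("local[*]").getOrCreate()

// Assumed: parse the schema definition from disk.
val schema: StructType = readSchemaFromFile("/schemas/events.json")

// Align an untyped CSV read with the expected schema.
val typedDf = spark.read.option("header", "true")
  .csv("/data/events.csv")
  .enforceSchema(schema)
```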
## **Supported Spark versions**
- 2.4.x
- 3.0.x
- 3.1.x