Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
A library that brings useful functions from various modern database management systems to Apache Spark
https://github.com/yaooqinn/itachi
Last synced: 3 months ago
- Host: GitHub
- URL: https://github.com/yaooqinn/itachi
- Owner: yaooqinn
- License: apache-2.0
- Created: 2020-04-02T12:00:49.000Z (almost 5 years ago)
- Default Branch: main
- Last Pushed: 2023-09-04T12:20:14.000Z (over 1 year ago)
- Last Synced: 2024-10-02T10:06:15.821Z (4 months ago)
- Topics: hive, postgres, presto, spark, trino
- Language: Scala
- Homepage: https://itachi.readthedocs.io/
- Size: 167 KB
- Stars: 53
- Watchers: 5
- Forks: 4
- Open Issues: 1
Metadata Files:
- Readme: README.rst
- License: LICENSE
Awesome Lists containing this project
- awesome-spark - itachi - A library that brings useful functions from modern database management systems to Apache Spark. (Packages / General Purpose Libraries)
README
itachi
======

itachi brings useful functions from modern database management systems to Apache Spark :)
For example, you can import the Postgres extensions and write Spark code that looks just like Postgres.
The functions are implemented as native Spark functions, so they're performant.
In general, only functions that are difficult for the Apache Spark community to maintain in the master branch will be added to this library.

Installation
------------

Fetch the JAR file from Maven::

    libraryDependencies += "com.github.yaooqinn" %% "itachi" % "0.1.0"
Here's `the Maven link <https://repo1.maven.org/maven2/com/github/yaooqinn/itachi_2.12/>`_ where the JAR files are stored.
itachi requires Spark 3+.
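
If you launch Spark interactively rather than building with sbt, the same artifact can be pulled from Maven at startup. A sketch (the coordinates match the sbt line above)::

    spark-shell --packages com.github.yaooqinn:itachi_2.12:0.1.0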

Simple function registration
----------------------------

Access the Postgres / Teradata functions with these commands::
    org.apache.itachi.registerPostgresFunctions
    org.apache.itachi.registerTeradataFunctions
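
For example, registration can happen right after the session is created. A minimal sketch, assuming a local SparkSession; only the ``org.apache.itachi`` calls come from this library::

    import org.apache.spark.sql.SparkSession

    // Standard session setup; itachi just needs its JAR on the classpath.
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("itachi-demo")
      .getOrCreate()

    // Registers the Postgres-flavoured functions with the active session.
    org.apache.itachi.registerPostgresFunctions

    // Postgres functions such as array_cat are now usable from Spark SQL.
    spark.sql("select array_cat(array(1, 2), array(3)) as cats").show()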

Simple example
--------------

Suppose you have the following data table and would like to join the two arrays with the familiar `array_cat <https://www.postgresql.org/docs/current/functions-array.html>`_ function from Postgres::
    +------+------+
    |  arr1|  arr2|
    +------+------+
    |[1, 2]|    []|
    |[1, 2]|[1, 3]|
    +------+------+

Concatenate the two arrays::
    spark
      .sql("select array_cat(arr1, arr2) as both_arrays from some_data")
      .show()

    +------------+
    | both_arrays|
    +------------+
    |      [1, 2]|
    |[1, 2, 1, 3]|
    +------------+

itachi lets you write Spark SQL code that looks just like Postgres SQL!
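
For reference, the ``some_data`` table above can be built like this. A sketch, assuming the SparkSession and function registration from the previous section::

    import spark.implicits._

    // Recreate the example table and expose it to SQL as some_data.
    Seq(
      (Seq(1, 2), Seq.empty[Int]),
      (Seq(1, 2), Seq(1, 3))
    ).toDF("arr1", "arr2")
      .createOrReplaceTempView("some_data")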

Spark SQL extensions installation
---------------------------------

Configure your Spark applications with ``spark.sql.extensions``, e.g. ``spark.sql.extensions=org.apache.spark.sql.extra.PostgreSQLExtensions``:
- org.apache.spark.sql.extra.PostgreSQLExtensions
- org.apache.spark.sql.extra.TeradataExtensions
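
The extension can also be injected programmatically when the session is built, which is handy for tests and notebooks. A sketch; ``spark.sql.extensions`` is the standard Spark injection point::

    import org.apache.spark.sql.SparkSession

    // Equivalent to setting spark.sql.extensions in spark-defaults.conf:
    // the extension registers its functions as the session is created.
    val spark = SparkSession.builder()
      .master("local[*]")
      .config("spark.sql.extensions",
        "org.apache.spark.sql.extra.PostgreSQLExtensions")
      .getOrCreate()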

Databricks Installation
-----------------------

Create an init script in DBFS::
    dbutils.fs.mkdirs("dbfs:/databricks/scripts/")
    dbutils.fs.put("/databricks/scripts/itachi-install.sh", """
    #!/bin/bash
    wget --quiet -O /mnt/driver-daemon/jars/itachi_2.12-0.1.0.jar https://repo1.maven.org/maven2/com/github/yaooqinn/itachi_2.12/0.1.0/itachi_2.12-0.1.0.jar""", true)

Before starting the cluster, set the Spark Config::
    spark.sql.extensions org.apache.spark.sql.extra.PostgreSQLExtensions
Also set the DBFS file path before starting the cluster::
    dbfs:/databricks/scripts/itachi-install.sh
You can now attach a notebook to the cluster and write queries with Postgres SQL syntax.
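
For instance, a Scala notebook cell on the configured cluster can call the Postgres functions directly. A sketch reusing ``array_cat`` from the example above::

    // Runs on a cluster whose Spark config loads the PostgreSQLExtensions.
    spark.sql("select array_cat(array(1, 2), array(3)) as cats").show()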

Spark SQL Compliance
--------------------

This is a Spark SQL extension that supplies add-on or aliased functions beyond the built-in Apache Spark SQL standard functions.

The functions in this library take precedence over the native Spark functions in the event of a name conflict.

Contributing
------------

**More functions from popular modern DBMSs can be added with your help!**