https://github.com/lnyemba/data-transport
read/write data anywhere, Lightweight ETL tool
- Host: GitHub
- URL: https://github.com/lnyemba/data-transport
- Owner: lnyemba
- Created: 2021-07-16T18:23:58.000Z (over 4 years ago)
- Default Branch: master
- Last Pushed: 2025-04-11T02:02:31.000Z (9 months ago)
- Last Synced: 2025-04-12T23:55:36.715Z (9 months ago)
- Topics: apache-zeppelin, database, etl, etl-pipeline, ferretdb, jupyter-notebook, lightweight-framework, mongodb, mysql, nosql, pandas, postgresql, python, sql
- Language: Python
- Homepage: https://healthcareio.the-phi.com/data-transport
- Size: 252 KB
- Stars: 3
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
README
# Introduction
This project provides an abstraction for reading from and writing to a variety of data stores through a simple, expressive interface. The abstraction works with **NoSQL**, **SQL**, and **Cloud** data stores and leverages **pandas**.
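As a rough, self-contained illustration of the idea described above, the sketch below exposes a uniform `read()`/`write()` contract over a store, with pandas DataFrames as the exchange format. This is not data-transport's actual API; the class names and parameters are hypothetical, and SQLite stands in for whichever backend you would actually use.

```python
# Minimal sketch of a uniform read/write abstraction over a data store,
# using pandas DataFrames as the exchange format. Illustration only;
# NOT data-transport's real API.
import sqlite3
import pandas as pd

class SQLiteReader:
    def __init__(self, path: str, table: str):
        self.path, self.table = path, table
    def read(self) -> pd.DataFrame:
        # Pull the whole table into a DataFrame.
        with sqlite3.connect(self.path) as conn:
            return pd.read_sql(f"SELECT * FROM {self.table}", conn)

class SQLiteWriter:
    def __init__(self, path: str, table: str):
        self.path, self.table = path, table
    def write(self, df: pd.DataFrame) -> None:
        # Append the DataFrame to the table, creating it if needed.
        with sqlite3.connect(self.path) as conn:
            df.to_sql(self.table, conn, if_exists="append", index=False)

# The same two-method contract (read() -> DataFrame, write(DataFrame)) can sit
# in front of SQL, NoSQL, or cloud stores; swapping stores leaves calling code unchanged.
SQLiteWriter("demo.db", "people").write(
    pd.DataFrame({"name": ["ada", "grace"], "age": [36, 45]})
)
print(SQLiteReader("demo.db", "people").read())
```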
# Why Use Data-Transport?
Data-Transport is aimed primarily at data scientists who do not want to worry about the underlying database and want a simple, consistent way to read, write, and move data. It also ships with a lightweight Extract-Transform-Load (ETL) API and a command-line (CLI) tool. Finally, pre/post-processing pipeline functions can be attached to reads and writes (a sketch of that idea follows the list below).
1. Familiarity with **pandas data-frames**
2. Connectivity **drivers** are included
3. Reading/Writing data from various sources
4. Useful for data migrations or **ETL**
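The sketch below shows the general idea of pre/post-processing hooks around a read/write path: plain `DataFrame -> DataFrame` callables applied in order. The hook signatures and how they are registered in data-transport may differ; this is only a conceptual illustration.

```python
# Conceptual illustration of pre/post-processing steps applied around a
# read/write path. The library's own hook mechanism may differ.
import pandas as pd

def drop_nulls(df: pd.DataFrame) -> pd.DataFrame:
    # Drop rows with any missing values before loading.
    return df.dropna()

def add_load_date(df: pd.DataFrame) -> pd.DataFrame:
    # Stamp each row with the date it was processed.
    out = df.copy()
    out["load_date"] = pd.Timestamp.now(tz="UTC").date().isoformat()
    return out

def apply_pipeline(df: pd.DataFrame, steps) -> pd.DataFrame:
    # Each step is a DataFrame -> DataFrame callable applied in order.
    for step in steps:
        df = step(df)
    return df

raw = pd.DataFrame({"name": ["ada", None, "grace"], "age": [36, None, 45]})
clean = apply_pipeline(raw, [drop_nulls, add_load_date])
print(clean)
```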
## Installation
Within the virtual environment, run:

```bash
pip install git+https://github.com/lnyemba/data-transport.git
```
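After installing, a quick sanity check confirms the package is importable. The import name `transport` is an assumption based on the project name; adjust it if the project documentation says otherwise.

```python
# Post-install sanity check. "transport" is the assumed import name of the
# data-transport package; change it if the project documents a different one.
import importlib

module = importlib.import_module("transport")
print("data-transport imported from:", module.__file__)
```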
## Features
- read/write from over a dozen databases
- run ETL jobs seamlessly
- scales and integrates into shared environments such as Apache Zeppelin, JupyterHub, SageMaker, ...
## What's new
Unlike versions 2.0 and earlier, the current release focuses on collaborative environments such as Jupyter-family servers and Apache Zeppelin (a sketch of the label-based registry idea follows this list):
1. Simpler syntax for creating a reader or a writer
2. An auth-file registry whose entries can be referenced by a label
3. DuckDB support
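To make the registry idea concrete, the sketch below keeps labeled connection settings in a small JSON file and resolves them by label, so calling code never repeats credentials. The file name, schema, and lookup function here are hypothetical; consult the project's notebooks for data-transport's actual registry conventions.

```python
# Illustration only: a label-based registry of connection settings.
# The actual file location, schema, and lookup API used by data-transport may differ.
import json
import pathlib

# A hypothetical registry mapping labels to connection settings.
registry_path = pathlib.Path("auth-registry.json")
registry_path.write_text(json.dumps({
    "warehouse": {"provider": "postgresql", "host": "localhost",
                  "database": "analytics", "user": "etl"},
    "scratch": {"provider": "duckdb", "database": "scratch.duckdb"},
}, indent=2))

def resolve(label: str) -> dict:
    """Return the connection settings registered under `label`."""
    return json.loads(registry_path.read_text())[label]

# Calling code refers to stores by label instead of repeating credentials.
print(resolve("scratch"))
```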
## Learn More
We provide notebooks with sample code for reading from and writing to MongoDB, CouchDB, Netezza, PostgreSQL, Google BigQuery, Databricks, Microsoft SQL Server, MySQL, and more. Visit the [data-transport homepage](https://healthcareio.the-phi.com/data-transport).