https://github.com/intake/intake-parquet
Parquet plugin for Intake
- Host: GitHub
- URL: https://github.com/intake/intake-parquet
- Owner: intake
- License: bsd-2-clause
- Created: 2018-01-05T19:34:47.000Z (over 8 years ago)
- Default Branch: master
- Last Pushed: 2023-12-15T15:17:27.000Z (over 2 years ago)
- Last Synced: 2024-06-19T00:29:24.720Z (almost 2 years ago)
- Language: Python
- Homepage: https://intake-parquet.readthedocs.io/en/latest/?badge=latest
- Size: 132 KB
- Stars: 11
- Watchers: 7
- Forks: 15
- Open Issues: 6
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Intake-parquet
[Build Status](https://travis-ci.org/ContinuumIO/intake-parquet)
[Documentation](http://intake-parquet.readthedocs.io/en/latest/?badge=latest)
[Intake data loader](https://github.com/ContinuumIO/intake/) interface to the Parquet binary tabular data format.
Parquet is very popular in the big-data ecosystem because it provides columnar
and chunk-wise access to the data, with efficient encodings and compression. This makes
the format particularly effective for streaming through large subsections of even
larger datasets, hence its common use with Hadoop and Spark.
Parquet data may be single files, directories of files, or nested directories, where
the directory names are meaningful in the partitioning of the data.
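To make the directory convention concrete, here is a small stdlib-only sketch (the file names and partition values are invented for illustration) that parses hive-style `key=value` partition directories of the kind described above:

```python
from pathlib import PurePosixPath

def partition_values(path):
    """Extract hive-style key=value partition fields from a file path."""
    parts = {}
    for segment in PurePosixPath(path).parts:
        if "=" in segment:
            key, _, value = segment.partition("=")
            parts[key] = value
    return parts

# A nested dataset layout where the directory names carry meaning:
print(partition_values("sales.parquet/year=2023/month=01/part-0.parquet"))
# {'year': '2023', 'month': '01'}
```

Readers of such a dataset can reconstruct the partition columns from the paths alone, without touching the file contents.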
### Features
The parquet plugin allows for:
- efficient metadata parsing, so you know the data types and number of records without
loading any data
- random access of partitions
- column and index selection, to load only the data you need
- passing of value-based filters, so that you load only those partitions containing some
valid data (NB: this does not filter the values within a partition)
### Installation
To install with conda:
```
conda install -c conda-forge intake-parquet
```
### Examples
See the notebook in the examples/ directory.