# Flint: A Time Series Library for Apache Spark

The ability to analyze time series data at scale is critical for the success of finance and IoT applications built on Spark.
Flint is Two Sigma's implementation of highly optimized time series operations in Spark.
It performs truly parallel and rich analyses on time series data by taking advantage of the natural ordering of time series data to provide locality-based optimizations.

Flint is an open source library for Spark built around the `TimeSeriesRDD`, a time-series-aware data structure, and a collection of time series utility and analysis functions that use `TimeSeriesRDD`s.
Unlike `DataFrame` and `Dataset`, Flint's `TimeSeriesRDD`s can leverage the existing ordering properties of datasets at rest and the fact that almost all data manipulation and analysis over these datasets respects their temporal ordering.
It differs from other time series efforts in Spark in its ability to compute efficiently across panel data and on large-scale, high-frequency data.

[![Documentation Status](https://readthedocs.org/projects/ts-flint/badge/?version=latest)](http://ts-flint.readthedocs.io/en/latest/?badge=latest)

## Requirements

| Dependency     | Version        |
| -------------- | -------------- |
| Spark Version  | 2.3 and 2.4    |
| Scala Version  | 2.12           |
| Python Version | 3.5 and above  |


## How to install
The Scala artifact is published in Maven Central:

https://mvnrepository.com/artifact/com.twosigma/flint

The Python artifact is published on PyPI:

https://pypi.org/project/ts-flint

Note that you will need both the Scala and Python artifacts to use Flint with PySpark.

## How to build
To build from source:

Scala (in the top-level dir):

```bash
sbt assemblyNoTest
```

Python (in the python subdir):
```bash
python setup.py install
```
or
```bash
pip install .
```

## Python bindings

The Python bindings for Flint, including quickstart instructions, are documented at [python/README.md](python/README.md).
API documentation is available at http://ts-flint.readthedocs.io/en/latest/.

## Getting Started

### Starting Point: `TimeSeriesRDD` and `TimeSeriesDataFrame`
The entry point into all time series analysis functionality in Flint is `TimeSeriesRDD` (for Scala) and `TimeSeriesDataFrame` (for Python). At a high level, a `TimeSeriesRDD` contains an `OrderedRDD`, which represents a sequence of ordered key-value pairs.
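The ordered key-value model can be sketched in plain Scala. This is only an illustration of the invariant, not the Flint API; `Row` here is a stand-in for Spark's `InternalRow`:

```scala
// Conceptual sketch, not the Flint API: a time series as a sequence of
// (timestamp, row) pairs whose Long keys are nanoseconds since the epoch.
object TimeSeriesModelSketch {
  type Row = Map[String, Any] // stand-in for Spark's InternalRow

  // The invariant an OrderedRDD maintains: keys are non-decreasing.
  def isOrdered(series: Seq[(Long, Row)]): Boolean =
    series.map(_._1).sliding(2).forall {
      case Seq(a, b) => a <= b
      case _         => true
    }
}
```

Because this ordering is known up front, Flint can use range-based locality instead of shuffling the whole dataset.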
A `TimeSeriesRDD` uses `Long` values, representing timestamps in nanoseconds since the epoch, as the keys of its `OrderedRDD`, and `InternalRow`s as the values, together representing a time series data set.

### Create `TimeSeriesRDD`

Applications can create a `TimeSeriesRDD` from an existing `RDD`, from an `OrderedRDD`, from a `DataFrame`, or from a single CSV file.

As an example, the following creates a `TimeSeriesRDD` from a gzipped CSV file with a header and a specific datetime format:

```scala
import com.twosigma.flint.timeseries.CSV
val tsRdd = CSV.from(
  sqlContext,
  "file://foo/bar/data.csv",
  header = true,
  dateFormat = "yyyyMMdd HH:mm:ss.SSS",
  codec = "gzip",
  sorted = true
)
```

To create a `TimeSeriesRDD` from a `DataFrame`, you have to make sure the `DataFrame` contains a column named "time" of type `LongType`.

```scala
import com.twosigma.flint.timeseries.TimeSeriesRDD
import scala.concurrent.duration._
val df = ... // A DataFrame whose rows have been sorted by their timestamps under the "time" column
val tsRdd = TimeSeriesRDD.fromDF(dataFrame = df)(isSorted = true, timeUnit = MILLISECONDS)
```

One could also create a `TimeSeriesRDD` from an `RDD[Row]` or an `OrderedRDD[Long, Row]` by providing a schema, e.g.

```scala
import com.twosigma.flint.timeseries._
import scala.concurrent.duration._
val rdd = ... // An RDD whose rows have been sorted by their timestamps
val tsRdd = TimeSeriesRDD.fromRDD(
  rdd,
  schema = Schema("time" -> LongType, "price" -> DoubleType)
)(isSorted = true,
  timeUnit = MILLISECONDS
)
```
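The `timeUnit` parameter in the examples above can be understood as a rescaling step: Flint keys rows by nanoseconds since the epoch, so epoch values supplied in another unit are converted on ingest. A minimal sketch of that conversion, using only `java.util.concurrent.TimeUnit` (not the Flint API):

```scala
import java.util.concurrent.TimeUnit
import java.util.concurrent.TimeUnit.MILLISECONDS

object TimeUnitSketch {
  // Rescale epoch timestamps from `unit` to Flint's internal nanoseconds.
  def rescale(times: Seq[Long], unit: TimeUnit): Seq[Long] =
    times.map(t => unit.toNanos(t))
}
```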
It is also possible to create a `TimeSeriesRDD` from a dataset stored as Parquet file(s). The `TimeSeriesRDD.fromParquet()` function provides options to specify which columns to read and/or which time range you are interested in, e.g.

```scala
import com.twosigma.flint.timeseries._
import scala.concurrent.duration._
val tsRdd = TimeSeriesRDD.fromParquet(
  sqlContext,
  path = "hdfs://foo/bar/"
)(isSorted = true,
  timeUnit = MILLISECONDS,
  columns = Seq("time", "id", "price"),  // By default, null for all columns
  begin = "20100101",                    // By default, null for no begin boundary
  end = "20150101"                       // By default, null for no end boundary
)
```

### Group functions

A group function groups rows with nearby (or exactly the same) timestamps.

- `groupByCycle` Groups rows within a cycle, i.e. rows with exactly the same timestamp. For example,

```scala
val priceTSRdd = ...
// A TimeSeriesRDD with columns "time" and "price"
// time  price
// -----------
// 1000L 1.0
// 1000L 2.0
// 2000L 3.0
// 2000L 4.0
// 2000L 5.0

val results = priceTSRdd.groupByCycle()
// time  rows
// ------------------------------------------------
// 1000L [[1000L, 1.0], [1000L, 2.0]]
// 2000L [[2000L, 3.0], [2000L, 4.0], [2000L, 5.0]]
```

- `groupByInterval` Groups rows whose timestamps fall into the same interval. Intervals can be defined by another `TimeSeriesRDD`: its timestamps are used to define the intervals, i.e. each pair of sequential timestamps defines an interval.
For example,

```scala
val priceTSRdd = ...
// A TimeSeriesRDD with columns "time" and "price"
// time  price
// -----------
// 1000L 1.0
// 1500L 2.0
// 2000L 3.0
// 2500L 4.0

val clockTSRdd = ...
// A TimeSeriesRDD with only the column "time"
// time
// -----
// 1000L
// 2000L
// 3000L

val results = priceTSRdd.groupByInterval(clockTSRdd)
// time  rows
// ----------------------------------
// 1000L [[1000L, 1.0], [1500L, 2.0]]
// 2000L [[2000L, 3.0], [2500L, 4.0]]
```

- `addWindows` For each row, adds a new column whose value is the list of rows within that row's `window`.

```scala
val priceTSRdd = ...
// A TimeSeriesRDD with columns "time" and "price"
// time  price
// -----------
// 1000L 1.0
// 1500L 2.0
// 2000L 3.0
// 2500L 4.0

val result = priceTSRdd.addWindows(Window.pastAbsoluteTime("1000ns"))
// time  price window_past_1000ns
// ------------------------------------------------------
// 1000L 1.0   [[1000L, 1.0]]
// 1500L 2.0   [[1000L, 1.0], [1500L, 2.0]]
// 2000L 3.0   [[1000L, 1.0], [1500L, 2.0], [2000L, 3.0]]
// 2500L 4.0   [[1500L, 2.0], [2000L, 3.0], [2500L, 4.0]]
```

### Temporal Join Functions

A temporal join function is a join defined by a matching criterion over time. A `tolerance` in the matching criterion specifies how far the join may look into the past or the future.

- `leftJoin` Performs a temporal left join with the right `TimeSeriesRDD`, i.e. a left join using inexact timestamp matches. For each row on the left, it appends the most recent row from the right at or before the same time. An example of joining two `TimeSeriesRDD`s follows.

```scala
val leftTSRdd = ...
val rightTSRdd = ...
val result = leftTSRdd.leftJoin(rightTSRdd, tolerance = "1day")
```
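The left-join semantics can be sketched in plain Scala over simple (time, value) pairs. This is a conceptual illustration only, not the Flint implementation; `tolerance` here is in the same units as the timestamps:

```scala
object AsOfJoinSketch {
  // For each left timestamp, append the most recent right value at or
  // before it, provided it is within `tolerance`; None otherwise.
  def leftJoin[V](
      left: Seq[Long],
      right: Seq[(Long, V)], // assumed sorted by time
      tolerance: Long
  ): Seq[(Long, Option[V])] =
    left.map { t =>
      val matches = right.filter { case (rt, _) => rt <= t && t - rt <= tolerance }
      t -> matches.lastOption.map(_._2)
    }
}
```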
- `futureLeftJoin` Performs a temporal future left join with the right `TimeSeriesRDD`, i.e. a left join using inexact timestamp matches. For each row on the left, it appends the closest future row from the right at or after the same time.

```scala
val result = leftTSRdd.futureLeftJoin(rightTSRdd, tolerance = "1day")
```

### Summarize Functions

Summarize functions apply one or more summarizers to rows within a certain period, such as a cycle, an interval, or a window.

- `summarizeCycles` Computes aggregate statistics of rows within a cycle, i.e. rows that share a timestamp.

```scala
val volTSRdd = ...
// A TimeSeriesRDD with columns "time", "id", and "volume"
// time  id volume
// ---------------
// 1000L 1  100
// 1000L 2  200
// 2000L 1  300
// 2000L 2  400

val result = volTSRdd.summarizeCycles(Summary.sum("volume"))
// time  volume_sum
// ----------------
// 1000L 300
// 2000L 700
```

Similarly, one can summarize over intervals, windows, or the whole time series data set. See

- `summarizeIntervals`
- `summarizeWindows`
- `addSummaryColumns`

See `timeseries.summarize.summarizer` for the available summarizers, such as `ZScoreSummarizer`, `CorrelationSummarizer`, and `NthCentralMomentSummarizer`.
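The cycle summarization above (`summarizeCycles` with a sum summarizer) can be sketched in plain Scala; this is a conceptual illustration of the semantics, not the Flint implementation:

```scala
object SummarizeCyclesSketch {
  // Group rows by timestamp (a "cycle") and sum one numeric column.
  def sumPerCycle(rows: Seq[(Long, Double)]): Seq[(Long, Double)] =
    rows.groupBy(_._1)
      .map { case (t, rs) => t -> rs.map(_._2).sum }
      .toSeq
      .sortBy(_._1)
}
```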
## Contributing

In order for us to accept your code contributions, please fill out the appropriate Contributor License Agreement in the `cla` folder and submit it to tsos@twosigma.com.

## Disclaimer

Apache Spark is a trademark of The Apache Software Foundation. The Apache Software Foundation is not affiliated with, endorsed by, connected to, sponsored by, or otherwise associated with Two Sigma, Flint, or this website in any manner.

© Two Sigma Open Source, LLC