# nifty-ls
A fast Lomb-Scargle periodogram. It's nifty, and uses a NUFFT!

[![PyPI](https://img.shields.io/pypi/v/nifty-ls)](https://pypi.org/project/nifty-ls/) [![Tests](https://github.com/flatironinstitute/nifty-ls/actions/workflows/tests.yml/badge.svg)](https://github.com/flatironinstitute/nifty-ls/actions/workflows/tests.yml) [![pre-commit.ci status](https://results.pre-commit.ci/badge/github/flatironinstitute/nifty-ls/main.svg)](https://results.pre-commit.ci/latest/github/flatironinstitute/nifty-ls/main) [![Jenkins Tests](https://jenkins.flatironinstitute.org/buildStatus/icon?job=nifty-ls%2Fmain&subject=Jenkins%20Tests)](https://jenkins.flatironinstitute.org/job/nifty-ls/job/main/) [![arXiv](https://img.shields.io/badge/arXiv-2409.08090-b31b1b.svg)](https://arxiv.org/abs/2409.08090)
## Overview
The Lomb-Scargle periodogram, used for identifying periodicity in irregularly-spaced
observations, is useful but computationally expensive. However, it can be
phrased mathematically as a pair of non-uniform FFTs (NUFFTs). This allows us to
leverage Flatiron Institute's [finufft](https://github.com/flatironinstitute/finufft/)
package, which is really fast! It also enables GPU (CUDA) support and is
several orders of magnitude more accurate than
[Astropy's Lomb Scargle](https://docs.astropy.org/en/stable/timeseries/lombscargle.html)
with default settings.

## Background
The [Press & Rybicki (1989) method](https://ui.adsabs.harvard.edu/abs/1989ApJ...338..277P/abstract) for Lomb-Scargle poses the computation as four weighted trigonometric sums that are solved with a pair of FFTs by "extirpolation" to an equi-spaced grid. Specifically, the sums are of the form:

```math
\begin{align}
S_k &= \sum_{j=1}^M h_j \sin(2 \pi f_k t_j), \\
C_k &= \sum_{j=1}^M h_j \cos(2 \pi f_k t_j),
\end{align}
```

where the $k$ subscript runs from 0 to $N$, the number of frequency bins, $f_k$ is the cyclic frequency of bin $k$, $t_j$ are the observation times (of which there are $M$), and $h_j$ are the weights.
The key observation for our purposes is that this is exactly what a non-uniform FFT computes! Specifically, a "type-1" (non-uniform to uniform) complex NUFFT in the [finufft convention](https://finufft.readthedocs.io/en/latest/math.html) computes:
```math
g_k = \sum_{j=1}^M h_j e^{i k t_j}.
```

The imaginary and real parts of this transform are Press & Rybicki's $S_k$ and $C_k$, with some adjustment for cyclic/angular frequencies, domain of $k$, real vs. complex transform, etc. finufft has a particularly fast and accurate spreading kernel ("exponential of semicircle") that it uses instead of Press & Rybicki's extirpolation.
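To make the correspondence concrete, here is a minimal sketch (not part of nifty-ls, with hypothetical variable names) that checks a type-1 NUFFT from the `finufft` Python package against the direct sums, assuming finufft's default mode ordering and sign conventions:

```python
import numpy as np
import finufft

rng = np.random.default_rng(0)
M, N = 500, 64
t = rng.uniform(0, 2 * np.pi, M)      # non-uniform observation times
h = rng.standard_normal(M)            # weights

# Type-1 NUFFT: g_k = sum_j h_j exp(i k t_j) for k = -N/2, ..., N/2 - 1
g = finufft.nufft1d1(t, h.astype(np.complex128), N, eps=1e-12)

k = np.arange(-(N // 2), (N + 1) // 2)
C_direct = np.array([np.sum(h * np.cos(kk * t)) for kk in k])
S_direct = np.array([np.sum(h * np.sin(kk * t)) for kk in k])

print(np.allclose(g.real, C_direct))  # True: the cosine sums C_k
print(np.allclose(g.imag, S_direct))  # True: the sine sums S_k
```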
There is some pre- and post-processing of $S_k$ and $C_k$ to compute the periodogram, which can become the bottleneck because finufft is so fast. This package also optimizes and parallelizes those computations.
## Installation
### From PyPI
For CPU support:

```console
$ pip install nifty-ls
```

For GPU (CUDA) support:
```console
$ pip install nifty-ls[cuda]
```

The default is to install with CUDA 12 support; one can use `nifty-ls[cuda11]` instead for CUDA 11 support (installs `cupy-cuda11x`).
### From source
First, clone the repo and `cd` to the repo root:
```console
$ git clone https://www.github.com/flatironinstitute/nifty-ls
$ cd nifty-ls
```

Then, to install with CPU support:
```console
$ pip install .
```

To install with GPU (CUDA) support:
```console
$ pip install .[cuda]
```

or `.[cuda11]` for CUDA 11.
For development (with automatic rebuilds enabled by default in `pyproject.toml`):
```console
$ pip install nanobind scikit-build-core
$ pip install -e .[test] --no-build-isolation
```

Developers may also be interested in setting these keys in `pyproject.toml`:
```toml
[tool.scikit-build]
cmake.build-type = "Debug"
cmake.verbose = true
install.strip = false
```

### For best performance
You may wish to compile and install finufft and cufinufft yourself so they will be
built with optimizations for your hardware. To do so, first install nifty-ls, then
follow the Python installation instructions for
[finufft](https://finufft.readthedocs.io/en/latest/install.html#building-a-python-interface-to-a-locally-compiled-library)
and
[cufinufft](https://finufft.readthedocs.io/en/latest/install_gpu.html#python-interface),
configuring the libraries as desired.

nifty-ls can likewise be built from source following the instructions above for
best performance, but most of the heavy computations are offloaded to (cu)finufft,
so the performance benefit is minimal.

## Usage
### From Astropy
Importing `nifty_ls` makes nifty-ls available via `method="fastnifty"` in
Astropy's LombScargle module. The name is prefixed with "fast" as it's part
of the fast family of methods that assume a regularly-spaced frequency grid.

```python
import nifty_ls
from astropy.timeseries import LombScargle
frequency, power = LombScargle(t, y).autopower(method="fastnifty")
```

Full example:
```python
import matplotlib.pyplot as plt
import nifty_ls
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(seed=123)
N = 1000
t = rng.uniform(0, 100, size=N)
y = np.sin(50 * t) + 1 + rng.poisson(size=N)

frequency, power = LombScargle(t, y).autopower(method='fastnifty')
plt.plot(frequency, power)
plt.xlabel('Frequency (cycles per unit time)')
plt.ylabel('Power')
```

To use the CUDA (cufinufft) backend, pass the appropriate argument via `method_kws`:
```python
frequency, power = LombScargle(t, y).autopower(method="fastnifty", method_kws=dict(backend="cufinufft"))
```

In many cases, accelerating your periodogram is as simple as setting the `method`
in your Astropy Lomb Scargle code! More advanced usage, such as computing multiple
periodograms in parallel, should go directly through the nifty-ls interface.

### From nifty-ls (native interface)
nifty-ls has its own interface that offers more flexibility than the Astropy
interface for batched periodograms.

#### Single periodograms
A single periodogram can be computed through nifty-ls as:
```python
import nifty_ls
# with automatic frequency grid:
nifty_res = nifty_ls.lombscargle(t, y, dy)

# with user-specified frequency grid:
nifty_res = nifty_ls.lombscargle(t, y, dy, fmin=0.1, fmax=10, Nf=10**6)
```

Full example:
```python
import nifty_ls
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=123)
N = 1000
t = np.sort(rng.uniform(0, 100, size=N))
y = np.sin(50 * t) + 1 + rng.poisson(size=N)

# with automatic frequency grid:
nifty_res = nifty_ls.lombscargle(t, y)

# with user-specified frequency grid:
nifty_res = nifty_ls.lombscargle(t, y, fmin=0.1, fmax=10, Nf=10**6)

plt.plot(nifty_res.freq(), nifty_res.power)
plt.xlabel('Frequency (cycles per unit time)')
plt.ylabel('Power')
```

#### Batched periodograms
Batched periodograms (multiple objects with the same observation times) can be
computed as:

```python
import nifty_ls
import numpy as np

N_t = 100
N_obj = 10
Nf = 200

rng = np.random.default_rng()
t = np.sort(rng.random(N_t))
obj_freqs = rng.random(N_obj).reshape(-1,1)
y_batch = np.sin(obj_freqs * t)
dy_batch = rng.random(y_batch.shape)

batched = nifty_ls.lombscargle(t, y_batch, dy_batch, Nf=Nf)
print(batched.power.shape) # (10, 200)
```

Note that this computes multiple periodograms simultaneously on a set of time
series with the same observation times. This approach is particularly efficient
for short time series, and/or when using the GPU.

Support for batching multiple time series with distinct observation times is
not currently implemented, but is planned.

### Limitations
The code only supports frequency grids with fixed spacing; however, finufft does
support type 3 NUFFTs (non-uniform to non-uniform), which would enable arbitrary
frequency grids. It's not clear how useful this is, so it hasn't been implemented,
but please open a GitHub issue if this is of interest to you.
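While not implemented in nifty-ls, a rough sketch of what an arbitrary-grid evaluation of the trigonometric sums could look like with finufft's type-3 transform is shown below (illustrative only; the grid and variable names are hypothetical, and this is not a nifty-ls API):

```python
import numpy as np
import finufft

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 1000))  # observation times
h = rng.standard_normal(t.size)         # weights
freqs = np.geomspace(0.1, 10, 500)      # arbitrary (here log-spaced) cyclic frequencies

# Type-3 NUFFT: g_k = sum_j h_j exp(i f_k (2 pi t_j)) for arbitrary f_k
g = finufft.nufft1d3(2 * np.pi * t, h.astype(np.complex128), freqs)
C, S = g.real, g.imag                   # cosine and sine sums on the custom grid
```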
## Performance

Using 16 cores of an Intel Icelake CPU and an NVIDIA A100 GPU, we obtain the following performance. First, we'll look at results from a single periodogram (i.e. unbatched):
![benchmarks](bench.png)
In this case, finufft is 5x faster (11x with threads) than Astropy for large transforms, and 2x faster for (very) small transforms. Small transforms improve further relative to Astropy with more frequency bins. (Dynamic multi-threaded dispatch of transforms is planned as a future feature which will especially benefit small $N$.)
cufinufft is 200x faster than Astropy for large $N$! The performance plateaus towards small $N$, mostly due to the overhead of sending data to the GPU and fetching the result. (Concurrent job execution on the GPU is another planned feature, which will especially help small $N$.)
The following demonstrates "batch mode", in which 10 periodograms are computed from 10 different time series with the same observation times:
![batched benchmarks](bench_batch.png)
Here, the finufft single-threaded advantage is consistently 6x across problem sizes, while the multi-threaded advantage is up to 30x for large transforms.
The 200x advantage of the GPU extends to even smaller $N$ in this case, since we're sending and receiving more data at once.
We see that both multi-threaded finufft and cufinufft particularly benefit from batched transforms, as this exposes more parallelism and amortizes fixed latencies.
We use `FFTW_MEASURE` for finufft in these benchmarks, which improves performance by a few tens of percent.
Multi-threading hurts the performance of small problem sizes; the default behavior of nifty-ls is to use fewer threads in such cases. The "multi-threaded" line uses between 1 and 16 threads.
On the CPU, nifty-ls gets its performance not only through its use of finufft, but also
by offloading the pre- and post-processing steps to compiled extensions. The extensions
enable us to do much more processing element-wise, rather than array-wise. In other words,
they enable "kernel fusion" (to borrow a term from GPU computing), increasing the compute
density.
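As a toy illustration of the idea (hypothetical quantities, not nifty-ls's actual post-processing), compare an array-wise computation, which materializes a temporary array for each operation, against a single fused pass; nifty-ls does the fused version in its C++ extensions, where the explicit loop is fast:

```python
import numpy as np

def power_arraywise(S, C, SS, CC):
    # Array-wise NumPy: each operation is a separate pass over memory
    # and allocates a full temporary array.
    return S**2 / SS + C**2 / CC

def power_fused(S, C, SS, CC):
    # Fused element-wise version: one pass, no temporaries. Shown in
    # pure Python for clarity; compiled, this loop has much higher
    # compute density.
    out = np.empty_like(S)
    for i in range(S.size):
        out[i] = S[i] * S[i] / SS[i] + C[i] * C[i] / CC[i]
    return out
```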
## Accuracy

While we compared performance with Astropy's `fast` method, this isn't quite fair. nifty-ls is much more accurate than Astropy `fast`! Astropy `fast` uses Press & Rybicki's extirpolation approximation, trading accuracy for speed, but thanks to finufft, nifty-ls can have both.

In the figure below, we plot the median periodogram error in circles and the 99th percentile error in triangles for astropy, finufft, and cufinufft for a range of $N$ (and default $N_F \approx 12N$).
The astropy result is presented for two cases: a nominal case and a "worst case". Internally, astropy uses an FFT grid whose size is the next power of 2 above the target oversampling rate. Each jump to a new power of 2 typically yields an increase in accuracy. The "worst case", therefore, is the highest frequency that does not yield such a jump.
![](accuracy.png)
Errors of $\mathcal{O}(10\\%)$ or greater are common with worst-case evaluations. Errors of $\mathcal{O}(1\\%)$ or greater are common in typical evaluations. nifty-ls is conservatively 6 orders of magnitude more accurate.
The reference result in the above figure comes from the "phase winding" method, which uses trigonometric identities to avoid expensive sin and cos evaluations. One can also use astropy's `fast` method as a reference with exact evaluation enabled via `use_fft=False`; this yields the same result, but phase winding is a few orders of magnitude faster (though still not competitive with finufft).
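A minimal sketch of the phase-winding idea (an illustrative reimplementation, not the actual reference code): on a uniform grid $f_k = f_{\mathrm{min}} + k \Delta f$, each term's phase advances by a fixed angle per frequency bin, so the sines and cosines can be updated via the angle-addition identities instead of being recomputed:

```python
import numpy as np

def winding_sums(t, h, fmin, df, Nf):
    # Phasors at the starting frequency and the fixed per-bin rotation.
    s = np.sin(2 * np.pi * fmin * t)
    c = np.cos(2 * np.pi * fmin * t)
    sd = np.sin(2 * np.pi * df * t)
    cd = np.cos(2 * np.pi * df * t)
    S = np.empty(Nf)
    C = np.empty(Nf)
    for k in range(Nf):
        S[k] = h @ s
        C[k] = h @ c
        # Advance every phase by 2*pi*df*t_j (angle-addition identities).
        s, c = s * cd + c * sd, c * cd - s * sd
    return S, C
```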
In summary, nifty-ls is highly accurate while also giving high performance.
### float32 vs float64
While 32-bit floats provide a substantial speedup for finufft and cufinufft, we generally don't recommend their use for Lomb-Scargle. The reason is the challenging [condition number](https://en.wikipedia.org/wiki/Condition_number) of the problem. The condition number is the response in the output to a small perturbation in the input, in other words, the derivative. [It can easily be shown](https://finufft.readthedocs.io/en/latest/trouble.html) that the derivative of a NUFFT with respect to the non-uniform points is proportional to $N$, the transform length (i.e. the number of modes). In other words, errors in the observation times are amplified by $\mathcal{O}(N)$. Since float32 has a relative error of $\mathcal{O}(10^{-7})$, transforms of length $10^5$ already suffer $\mathcal{O}(1\\%)$ error. Therefore, we focus on float64 in nifty-ls, but float32 is also natively supported by all backends for adventurous users.

The condition number is also a likely contributor to the mild upward trend in error versus $N$ in the above figure, at least for finufft/cufinufft. With a relative error of $\mathcal{O}(10^{-16})$ for float64 and a transform length of $\mathcal{O}(10^{6})$, the minimum error is $\mathcal{O}(10^{-10})$.
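As a rough sketch of this effect (indicative only; exact numbers will vary by platform and backend), one can compare a float32 periodogram against a float64 reference:

```python
import numpy as np
import nifty_ls

rng = np.random.default_rng(2)
N = 10**5
t = np.sort(rng.uniform(0, 100, N))
y = np.sin(50 * t)

ref = nifty_ls.lombscargle(t, y).power                  # float64
f32 = nifty_ls.lombscargle(t.astype(np.float32),
                           y.astype(np.float32)).power  # float32

# Per the O(N) amplification argument, expect roughly percent-level
# errors at this transform length.
print(np.max(np.abs(f32 - ref)) / np.max(ref))
```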
## Testing
First, install from source (`pip install .[test]`). Then, from the repo root, run:

```console
$ pytest
```

The tests are defined in the `tests/` directory, and include a mini-benchmark of
nifty-ls and Astropy, shown below:

```
$ pytest
======================================================== test session starts =========================================================
platform linux -- Python 3.10.13, pytest-8.1.1, pluggy-1.4.0
benchmark: 4.0.0 (defaults: timer=time.perf_counter disable_gc=True min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /mnt/home/lgarrison/nifty-ls
configfile: pyproject.toml
plugins: benchmark-4.0.0, asdf-2.15.0, anyio-3.6.2, hypothesis-6.23.1
collected 36 items

tests/test_ls.py ...................... [ 61%]
tests/test_perf.py .............. [100%]

----------------------------------------- benchmark 'Nf=1000': 5 tests ----------------------------------------
Name (time in ms) Min Mean StdDev Rounds Iterations
---------------------------------------------------------------------------------------------------------------
test_batched[finufft-1000] 6.8418 (1.0) 7.1821 (1.0) 0.1831 (1.32) 43 1
test_batched[cufinufft-1000] 7.7027 (1.13) 8.6634 (1.21) 0.9555 (6.89) 74 1
test_unbatched[finufft-1000] 110.7541 (16.19) 111.0603 (15.46) 0.1387 (1.0) 10 1
test_unbatched[astropy-1000] 441.2313 (64.49) 441.9655 (61.54) 1.0732 (7.74) 5 1
test_unbatched[cufinufft-1000] 488.2630 (71.36) 496.0788 (69.07) 6.1908 (44.63) 5 1
---------------------------------------------------------------------------------------------------------------

---------------------------------- benchmark 'Nf=10000': 3 tests ----------------------------------
Name (time in ms) Min Mean StdDev Rounds Iterations
--------------------------------------------------------------------------------------------------
test[finufft-10000] 1.8481 (1.0) 1.8709 (1.0) 0.0347 (1.75) 507 1
test[cufinufft-10000] 5.1269 (2.77) 5.2052 (2.78) 0.3313 (16.72) 117 1
test[astropy-10000] 8.1725 (4.42) 8.2176 (4.39) 0.0198 (1.0) 113 1
--------------------------------------------------------------------------------------------------

---------------------------------- benchmark 'Nf=100000': 3 tests ----------------------------------
Name (time in ms) Min Mean StdDev Rounds Iterations
-----------------------------------------------------------------------------------------------------
test[cufinufft-100000] 5.8566 (1.0) 6.0411 (1.0) 0.7407 (10.61) 159 1
test[finufft-100000] 6.9766 (1.19) 7.1816 (1.19) 0.0748 (1.07) 132 1
test[astropy-100000] 47.9246 (8.18) 48.0828 (7.96) 0.0698 (1.0) 19 1
-----------------------------------------------------------------------------------------------------

-------------------------------------- benchmark 'Nf=1000000': 3 tests --------------------------------------
Name (time in ms) Min Mean StdDev Rounds Iterations
------------------------------------------------------------------------------------------------------------
test[cufinufft-1000000] 8.0038 (1.0) 8.5193 (1.0) 1.3245 (1.62) 84 1
test[finufft-1000000] 74.9239 (9.36) 76.5690 (8.99) 0.8196 (1.0) 10 1
test[astropy-1000000] 1,430.4282 (178.72) 1,434.7986 (168.42) 5.5234 (6.74) 5 1
------------------------------------------------------------------------------------------------------------

Legend:
Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
OPS: Operations Per Second, computed as 1 / Mean
======================================================== 36 passed in 30.81s =========================================================
```

The results were obtained using 16 cores of an Intel Icelake CPU and 1 NVIDIA A100 GPU.
The ratio of each runtime relative to the fastest is shown in parentheses. You may obtain
very different performance on your platform! The slowest Astropy results in particular may
depend on the Numpy distribution you have installed and its trig function performance.

## Authors
nifty-ls was originally implemented by [Lehman Garrison](https://github.com/lgarrison)
based on work done by [Dan Foreman-Mackey](https://github.com/dfm) in the
[dfm/nufft-ls](https://github.com/dfm/nufft-ls) repo, with consulting from
[Alex Barnett](https://github.com/ahbarnett).

## Citation
If you use nifty-ls in an academic work, please cite our RNAAS research note:

```bibtex
@article{Garrison_2024,
    doi = {10.3847/2515-5172/ad82cd},
    url = {https://dx.doi.org/10.3847/2515-5172/ad82cd},
    year = {2024},
    month = {oct},
    publisher = {The American Astronomical Society},
    volume = {8},
    number = {10},
    pages = {250},
    author = {Lehman H. Garrison and Dan Foreman-Mackey and Yu-hsuan Shih and Alex Barnett},
    title = {nifty-ls: Fast and Accurate Lomb–Scargle Periodograms Using a Non-uniform FFT},
    journal = {Research Notes of the AAS},
    abstract = {We present nifty-ls, a software package for fast and accurate evaluation of the Lomb–Scargle periodogram. nifty-ls leverages the fact that Lomb–Scargle can be computed using a non-uniform fast Fourier transform (NUFFT), which we evaluate with the Flatiron Institute NUFFT package (finufft). This approach achieves a many-fold speedup over the Press & Rybicki method as implemented in Astropy and is simultaneously many orders of magnitude more accurate. nifty-ls also supports fast evaluation on GPUs via CUDA and integrates with the Astropy Lomb–Scargle interface. nifty-ls is publicly available at https://github.com/flatironinstitute/nifty-ls/.}
}
```

A pre-print of the article is available on arXiv: https://arxiv.org/abs/2409.08090
## Acknowledgements
nifty-ls builds directly on top of the excellent finufft package by Alex Barnett
and others (see the [finufft Acknowledgements](https://finufft.readthedocs.io/en/latest/ackn.html)).

Many parts of this package are an adaptation of [Astropy LombScargle](https://docs.astropy.org/en/stable/timeseries/lombscargle.html), in particular the Press & Rybicki (1989) method.