# awesome-python-benchmarks

Statistical benchmarking of Python packages.

## Machine-learning benchmarks

* [Papers with Code](https://paperswithcode.com/) hosts benchmarks across many categories. For instance, the [ImageNet classification](https://paperswithcode.com/sota/image-classification-on-imagenet) leaderboard lists papers and methods that have performed well, many of which come with Python implementations.

## Python time-series benchmarking

* The [MCompetitions](https://github.com/Mcompetitions) repositories list winning methods for the M4 and [M5](https://github.com/Mcompetitions/M5-methods/tree/master/Code%20of%20Winning%20Methods) contests. For example, the [LightGBM approach document](https://github.com/Mcompetitions/M5-methods/blob/master/Code%20of%20Winning%20Methods/U1/M5%20Winning%20Submission.docx) can be found there, alongside the other winners.

* [Time-Series Elo ratings](https://microprediction.github.io/timeseries-elo-ratings/html_leaderboards/overall.html) consider methods for autonomous univariate prediction of relatively short sequences (400 lags) and rank performance on predictions from 1 to 34 steps ahead; a minimal sketch of this style of rolling k-step-ahead evaluation follows this list.

* Papers with Code also hosts a couple of time-series forecasting benchmarks, such as [ETTh1](https://paperswithcode.com/sota/time-series-forecasting-on-etth1-24).
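
As a rough illustration of the rolling, k-step-ahead evaluation used by the Elo ratings above, here is a minimal self-contained sketch. The forecasters and the MAE scoring are hypothetical stand-ins for illustration only, not the actual Elo methodology or the `timemachines` skaters:

```python
import numpy as np

def last_value_forecast(history, k):
    """Naive forecaster: repeat the last observed value k steps ahead."""
    return np.repeat(history[-1], k)

def drift_forecast(history, k):
    """Naive drift forecaster: extrapolate the average first difference."""
    slope = (history[-1] - history[0]) / max(len(history) - 1, 1)
    return history[-1] + slope * np.arange(1, k + 1)

def rolling_mae(series, forecaster, burn_in=100, k=34):
    """Mean absolute error of k-step-ahead forecasts over a rolling origin."""
    errors = []
    for t in range(burn_in, len(series) - k):
        prediction = forecaster(series[:t], k)
        errors.append(np.mean(np.abs(series[t:t + k] - prediction)))
    return float(np.mean(errors))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    series = np.cumsum(rng.normal(size=400))  # a short sequence of ~400 lags
    for name, f in [("last value", last_value_forecast), ("drift", drift_forecast)]:
        print(f"{name}: MAE = {rolling_mae(series, f):.3f}")
```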

## Python black-box derivative-free benchmarking

* [COCO](https://github.com/numbbo/coco) is a platform for comparing continuous optimizers in a black-box setting, as explained in the accompanying [paper](https://arxiv.org/pdf/1603.08785.pdf); a sketch of its `cocoex` experiment loop follows this list.

* The [BBOB workshop series](http://numbbo.github.io/workshops/index.html) features ten workshops on black-box methods, most recently the [2019 workshop](http://numbbo.github.io/workshops/BBOB-2019/index.html).

* The [Nevergrad benchmarking suite](https://facebookresearch.github.io/nevergrad/benchmarking.html#) is discussed in this [paper](https://arxiv.org/pdf/2010.04542.pdf); a minimal Nevergrad usage sketch also appears after this list.

* [Optimizer Elo ratings](https://microprediction.github.io/optimizer-elo-ratings/html_leaderboards/overall.html) rate a hundred approaches to derivative-free optimization on an ongoing basis, with methods taken from packages such as NLOPT, Nevergrad, BayesOpt, PySOT, Skopt, Bobyqa, Hebo, Optuna, and many others.
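
The following is a minimal sketch of how an experiment on COCO's `bbob` suite is typically wired up with the `cocoex` Python module, assuming the suite and observer names documented in the COCO repository; the random-search "optimizer" is just a placeholder, not a recommended method:

```python
import numpy as np
import cocoex  # ships with the COCO repository (distributed on PyPI as coco-experiment)

# The "bbob" suite contains the standard single-objective noiseless test functions.
suite = cocoex.Suite("bbob", "", "dimensions: 2,3")
observer = cocoex.Observer("bbob", "result_folder: random-search-demo")

rng = np.random.default_rng(0)
for problem in suite:
    problem.observe_with(observer)  # log every evaluation for post-processing
    # Placeholder optimizer: pure random search within the problem's box bounds.
    budget = 100 * problem.dimension
    for _ in range(budget):
        x = rng.uniform(problem.lower_bounds, problem.upper_bounds)
        problem(x)  # evaluate the candidate; the observer records the result
```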
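
And a minimal Nevergrad usage sketch on a toy objective, loosely following the library's quick-start; the choice of optimizer and budget here is illustrative:

```python
import nevergrad as ng

def objective(x):
    """Toy objective: a shifted sphere function with minimum at x = 0.5."""
    return sum((x - 0.5) ** 2)

# NGOpt is Nevergrad's meta-optimizer; parametrization=2 means a 2-dimensional array input.
optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100)
recommendation = optimizer.minimize(objective)
print(recommendation.value)  # should be close to [0.5, 0.5]
```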

## R time-series benchmarking

* [ForecastBenchmark](https://github.com/DescartesResearch/ForecastBenchmark) automatically evaluates and ranks forecasting methods based on their performance in a diverse set of evaluation scenarios. The benchmark comprises four different use cases, each covering 100 heterogeneous time series taken from different domains.