Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/zaccharieramzi/hoag
Code for the bi-level experiments of the ICLR 2022 paper "SHINE: SHaring the INverse Estimate from the forward pass for bi-level optimization and implicit models" (on branch shine)
bi-level-optimization implicit-models quasi-newton-method
Last synced: 3 months ago
- Host: GitHub
- URL: https://github.com/zaccharieramzi/hoag
- Owner: zaccharieramzi
- Fork: true (fabianp/hoag)
- Created: 2021-03-23T09:48:08.000Z (over 3 years ago)
- Default Branch: master
- Last Pushed: 2021-11-21T00:11:59.000Z (almost 3 years ago)
- Last Synced: 2024-07-04T00:58:01.989Z (4 months ago)
- Topics: bi-level-optimization, implicit-models, quasi-newton-method
- Language: Python
- Homepage: https://openreview.net/forum?id=-ApAkox5mp
- Size: 1.16 MB
- Stars: 6
- Watchers: 1
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.rst
README
.. image:: https://travis-ci.org/fabianp/hoag.svg?branch=master
   :target: https://travis-ci.org/fabianp/hoag

HOAG
====

Hyperparameter optimization with approximate gradient.

.. image:: https://raw.githubusercontent.com/fabianp/hoag/master/doc/comparison_ho_real_sim.png
   :scale: 50 %

Depends
-------

* scikit-learn 0.16
Usage
-----

This package exports a LogisticRegressionCV class which automatically estimates the L2 regularization of logistic regression. Like other scikit-learn estimators, it has .fit and .predict methods. However, unlike other scikit-learn estimators, the .fit method takes four arguments: the train set and the test set. For example:
>>> from hoag import LogisticRegressionCV
>>> clf = LogisticRegressionCV()
>>> clf.fit(X_train, y_train, X_test, y_test)

where X_train, y_train, X_test, y_test are numpy arrays representing the train and test sets, respectively.
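A minimal self-contained sketch of the same workflow is shown below. The synthetic dataset and the train/test split are illustrative, not part of hoag, and the import paths assume a recent scikit-learn (in 0.16, the split helper lives in sklearn.cross_validation instead).

.. code-block:: python

    from hoag import LogisticRegressionCV
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split

    # Illustrative binary classification data; any numpy arrays work.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # .fit takes both splits: the held-out set drives the estimate of the
    # L2 regularization strength.
    clf = LogisticRegressionCV()
    clf.fit(X_train, y_train, X_test, y_test)

    # .predict uses the regularization estimated during fitting.
    y_pred = clf.predict(X_test)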
For a full usage example, check out `this ipython notebook <https://github.com/fabianp/hoag/blob/master/doc/example_usage.ipynb>`_.
.. image:: https://raw.githubusercontent.com/fabianp/hoag/master/doc/hoag_screenshot.png
   :target: https://github.com/fabianp/hoag/blob/master/doc/example_usage.ipynb

Usage tips
----------

Standardize the features of the input data so that each feature has unit variance. This makes the Hessian better conditioned. This can be done using e.g. scikit-learn's StandardScaler.
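A sketch of this preprocessing, reusing the X_train and X_test arrays from the example above; the scaler is fit on the training set only and then applied to both splits:

.. code-block:: python

    from sklearn.preprocessing import StandardScaler

    # Fit the scaler on the training features only, then apply it to both
    # splits, giving each feature zero mean and unit variance.
    scaler = StandardScaler()
    X_train = scaler.fit_transform(X_train)
    X_test = scaler.transform(X_test)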
Citing
------

If you use this, please cite it as
.. code-block::
@inproceedings{PedregosaHyperparameter16,
author = {Fabian Pedregosa},
title = {Hyperparameter optimization with approximate gradient},
booktitle = {Proceedings of the 33rd International Conference on Machine Learning,
{ICML}},
year = {2016},
url = {http://jmlr.org/proceedings/papers/v48/pedregosa16.html},
}