https://github.com/alexshtf/exp_prox_pt
Proximal point with exponential losses
- Host: GitHub
- URL: https://github.com/alexshtf/exp_prox_pt
- Owner: alexshtf
- License: apache-2.0
- Created: 2024-11-21T07:11:11.000Z (5 months ago)
- Default Branch: master
- Last Pushed: 2025-01-08T16:44:45.000Z (4 months ago)
- Last Synced: 2025-01-22T18:29:22.033Z (3 months ago)
- Topics: machine-learning, pytorch, scipy
- Language: Jupyter Notebook
- Size: 163 KB
- Stars: 1
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
A SciPy and PyTorch implementation of the proximal operator $\mathrm{prox}_{\eta f}(w)$ of functions of the form:
```math
f(w;\theta, \phi, b, \alpha) = \exp(\langle \theta, w \rangle + b) + \langle \phi, w \rangle + \frac{\alpha}{2} \| w \|_2^2
```
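For reference, $\mathrm{prox}_{\eta f}$ denotes the standard proximal operator with step size $\eta > 0$:

```math
\mathrm{prox}_{\eta f}(w) = \operatorname*{argmin}_{u} \left\{ f(u) + \frac{1}{2\eta} \| u - w \|_2^2 \right\}
```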
This repository contains two modules, `exp_prox.np` and `exp_prox.torch`, each of which provides a function with the following signature:
```python
def prox_op(w: Array, eta: Array, theta: Array, phi: Array, b: Union[Float, Array], alpha: Union[Float, Array]) -> Array
```
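The repository's own solver is not reproduced here, but for a single vector the prox of this family reduces to one-dimensional root finding: the optimality condition determines $u$ up to the scalar $s = \langle \theta, u \rangle$. The sketch below illustrates this under stated assumptions; `prox_exp_sketch` is a hypothetical name, it handles a single instance only (no mini-batching), and it is not the library's implementation.

```python
import numpy as np
from scipy.optimize import brentq

def prox_exp_sketch(w, eta, theta, phi, b, alpha):
    # Sketch only: single-vector prox of
    #   f(u) = exp(<theta,u> + b) + <phi,u> + (alpha/2)*||u||^2.
    # Setting the gradient of f(u) + (1/(2*eta))*||u - w||^2 to zero gives
    #   (1 + eta*alpha) * u = w - eta*phi - eta*exp(<theta,u> + b) * theta,
    # so u is determined by the scalar s = <theta, u>, which solves g(s) = 0:
    c = theta @ w - eta * (theta @ phi)
    t2 = theta @ theta

    def g(s):
        return (1 + eta * alpha) * s - c + eta * np.exp(s + b) * t2

    # g is strictly increasing, so the root is unique; expand a bracket, solve.
    lo, hi = -1.0, 1.0
    while g(lo) > 0:
        lo *= 2.0
    while g(hi) < 0:
        hi *= 2.0
    s = brentq(g, lo, hi)
    return (w - eta * phi - eta * np.exp(s + b) * theta) / (1 + eta * alpha)
```

Because $g$ is strictly increasing (its derivative is $1 + \eta\alpha + \eta e^{s+b}\|\theta\|^2 > 0$), the bracketing loops always terminate and the root is unique, so the prox is well defined.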
The functions support mini-batches by treating all but the last dimension as batch dimensions.

Functions of the above family appear, for example, as _regularized_ losses in Poisson regression. To that end, the package also provides a utility function for incremental Poisson regression. For example, the snippet below implements the incremental proximal-point algorithm for Poisson regression:
```python
import numpy as np

from exp_prox import poisson_params
from exp_prox.np import prox_op

step_size = 1e-3
reg_coef = 1e-5
w = np.zeros(num_features)  # the learned model weights
for X, y in data_set:
    w = prox_op(w, *poisson_params(step_size, X, y, reg_coef))
```
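Given `prox_op`'s signature and the unpacking in the snippet, `poisson_params` plausibly returns a tuple `(eta, theta, phi, b, alpha)`. The sketch below is a hypothetical single-example reconstruction of that mapping (the actual `poisson_params` may differ, e.g. in how it batches over `X`): the per-example regularized Poisson negative log-likelihood, up to a term constant in $w$, is a member of the loss family above.

```python
import numpy as np

def poisson_params_sketch(step_size, x, y, reg_coef):
    # Hypothetical reconstruction (not the library's actual code): the
    # per-example regularized Poisson NLL, up to a term constant in w,
    #   exp(<x, w>) - y * <x, w> + (reg_coef / 2) * ||w||^2,
    # is a member of the family f(w; theta, phi, b, alpha) with
    # theta = x, phi = -y * x, b = 0, alpha = reg_coef.
    return step_size, x, -y * x, 0.0, reg_coef
```

With this mapping, one proximal-point step on a single example is exactly an evaluation of the prox of the exponential-loss family at the current iterate.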