https://github.com/sandialabs/poblano_toolbox
Nonlinear optimization for MATLAB.
- Host: GitHub
- URL: https://github.com/sandialabs/poblano_toolbox
- Owner: sandialabs
- License: other
- Created: 2019-04-25T16:36:12.000Z (about 6 years ago)
- Default Branch: main
- Last Pushed: 2020-06-17T18:57:49.000Z (almost 5 years ago)
- Last Synced: 2025-04-06T22:38:49.770Z (about 1 month ago)
- Topics: decomposition-methods, matlab, optimization, scr-1255, snl-applications, snl-data-analysis, snl-science-libs, tensor
- Language: MATLAB
- Homepage: https://sandialabs.github.io/poblano_toolbox/
- Size: 304 KB
- Stars: 28
- Watchers: 4
- Forks: 14
- Open Issues: 4
Metadata Files:
- Readme: README.md
- License: LICENSE.txt
# Poblano Toolbox for MATLAB
```
Copyright 2009 National Technology & Engineering Solutions of Sandia,
LLC (NTESS). Under the terms of Contract DE-NA0003525 with NTESS, the
U.S. Government retains certain rights in this software.
```

Poblano is a MATLAB toolbox of large-scale algorithms for
unconstrained nonlinear optimization problems. The algorithms in
Poblano require only first-order derivative information (e.g.,
gradients for scalar-valued objective functions), and therefore can
scale to very large problems. The driving application for Poblano
development has been tensor decompositions in data analysis
applications (bibliometric analysis, social network analysis,
chemometrics, etc.).

Poblano optimizers find local minimizers of scalar-valued objective
functions taking vector inputs. The gradient (i.e., first derivative)
of the objective function is required for all Poblano optimizers. The
optimizers converge to a stationary point where the gradient is
approximately zero. A line search satisfying the strong Wolfe
conditions is used to guarantee global convergence of the Poblano
optimizers. The optimization methods in Poblano include several
nonlinear conjugate gradient methods (Fletcher-Reeves, Polak-Ribiere,
Hestenes-Stiefel), a limited-memory quasi-Newton method using BFGS
updates to approximate second-order derivative information, and a
truncated Newton method using finite differences to approximate
second-order derivative information.
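As a concrete illustration, here is a minimal usage sketch. The optimizer names (`ncg`, `lbfgs`, `tn`) and the convention of a function handle returning both the objective value and its gradient follow Poblano's documented interface; `myfun` is a hypothetical test objective, and the output field names (`out.X`, `out.F`) are assumptions taken from the toolbox documentation, so verify them against the docs before relying on them.

```matlab
% Sketch of calling a Poblano optimizer on a simple objective.
x0 = randn(10, 1);          % random starting point
out = ncg(@myfun, x0);      % nonlinear conjugate gradient
% out = lbfgs(@myfun, x0);  % limited-memory BFGS variant
% out = tn(@myfun, x0);     % truncated Newton variant
% out.X holds the computed minimizer; out.F the final objective value.

function [f, g] = myfun(x)
    % Hypothetical objective f(x) = sum(x.^4) with its gradient --
    % every Poblano optimizer requires first-derivative information.
    f = sum(x.^4);
    g = 4 * x.^3;
end
```

Because only the gradient is supplied, the same calling pattern scales to very large problems: the limited-memory and truncated-Newton methods build their second-order approximations internally from gradient evaluations alone.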