{"id":15047430,"url":"https://github.com/kthohr/optim","last_synced_at":"2025-05-16T18:07:10.254Z","repository":{"id":39636681,"uuid":"94275048","full_name":"kthohr/optim","owner":"kthohr","description":"OptimLib: a lightweight C++ library of numerical optimization methods for nonlinear functions","archived":false,"fork":false,"pushed_at":"2024-04-28T14:30:24.000Z","size":12282,"stargazers_count":846,"open_issues_count":11,"forks_count":138,"subscribers_count":38,"default_branch":"master","last_synced_at":"2025-04-03T17:13:57.853Z","etag":null,"topics":["armadillo","automatic-differentiation","bfgs","cpp","cpp11","differential-evolution","eigen","eigen3","evolutionary-algorithms","gradient-descent","lbfgs","newton","numerical-optimization-methods","openmp","openmp-parallelization","optim","optimization","optimization-algorithms","particle-swarm-optimization"],"latest_commit_sha":null,"homepage":"https://optimlib.readthedocs.io/en/latest/","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/kthohr.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2017-06-14T01:31:47.000Z","updated_at":"2025-03-30T11:08:43.000Z","dependencies_parsed_at":"2022-07-13T05:10:26.192Z","dependency_job_id":"52c6336b-c798-4bce-b159-2015c10bfefc","html_url":"https://github.com/kthohr/optim","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kthohr%2Foptim","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/reposi
tories/kthohr%2Foptim/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kthohr%2Foptim/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/kthohr%2Foptim/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/kthohr","download_url":"https://codeload.github.com/kthohr/optim/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248599297,"owners_count":21131257,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["armadillo","automatic-differentiation","bfgs","cpp","cpp11","differential-evolution","eigen","eigen3","evolutionary-algorithms","gradient-descent","lbfgs","newton","numerical-optimization-methods","openmp","openmp-parallelization","optim","optimization","optimization-algorithms","particle-swarm-optimization"],"created_at":"2024-09-24T20:58:10.451Z","updated_at":"2025-04-12T16:42:33.421Z","avatar_url":"https://github.com/kthohr.png","language":"C++","readme":"# OptimLib \u0026nbsp; [![Build Status](https://github.com/kthohr/optim/actions/workflows/main.yml/badge.svg)](https://github.com/kthohr/optim/actions/workflows/main.yml) [![Coverage Status](https://codecov.io/github/kthohr/optim/coverage.svg?branch=master)](https://codecov.io/github/kthohr/optim?branch=master) [![License](https://img.shields.io/badge/Licence-Apache%202.0-blue.svg)](./LICENSE) [![Documentation Status](https://readthedocs.org/projects/optimlib/badge/?version=latest)](https://optimlib.readthedocs.io/en/latest/?badge=latest)\n\nOptimLib is a lightweight C++ 
library of numerical optimization methods for nonlinear functions.\n\nFeatures:\n\n* A C++11/14/17 library of local and global optimization algorithms, as well as root finding techniques.\n* Derivative-free optimization using advanced, parallelized metaheuristic methods.\n* Constrained optimization routines to handle simple box constraints, as well as systems of nonlinear constraints.\n* For fast and efficient matrix-based computation, OptimLib supports the following templated linear algebra libraries:\n  * [Armadillo](http://arma.sourceforge.net/)\n  * [Eigen](http://eigen.tuxfamily.org/index.php) (version \u003e= 3.4.0)\n* Automatic differentiation functionality is available through use of the [Autodiff library](https://autodiff.github.io)\n* OpenMP-accelerated algorithms for parallel computation. \n* Straightforward linking with parallelized BLAS libraries, such as [OpenBLAS](https://github.com/xianyi/OpenBLAS).\n* Available as a single precision (``float``) or double precision (``double``) library.\n* Available as a header-only library, or as a compiled shared library.\n* Released under a permissive, non-GPL license.\n\n### Contents:\n\n* [Algorithms](#algorithms)\n* [Documentation](#documentation)\n* [General API](#api)\n* [Installation](#installation)\n* [R Compatibility](#r-compatibility)\n* [Examples](#examples)\n* [Automatic Differentiation](#automatic-differentiation)\n* [Author and License](#author)\n\n## Algorithms\n\nA list of currently available algorithms includes:\n\n* Broyden's Method (for root finding)\n* Newton's method, BFGS, and L-BFGS\n* Gradient descent: basic, momentum, Adam, AdaMax, Nadam, NadaMax, and more\n* Nonlinear Conjugate Gradient\n* Nelder-Mead\n* Differential Evolution (DE)\n* Particle Swarm Optimization (PSO)\n\n## Documentation\n\nFull documentation is available online:\n\n[![Documentation Status](https://readthedocs.org/projects/optimlib/badge/?version=latest)](https://optimlib.readthedocs.io/en/latest/?badge=latest)\n\nA PDF 
version of the documentation is available [here](https://buildmedia.readthedocs.org/media/pdf/optimlib/latest/optimlib.pdf).\n\n## API\n\nThe OptimLib API follows a relatively simple convention, with most algorithms called in the following manner:\n```\nalgorithm_id(\u003cinitial/final values\u003e, \u003cobjective function\u003e, \u003cobjective function data\u003e);\n```\nThe inputs, in order, are:\n* A writable vector of initial values to define the starting point of the algorithm. In the event of successful completion, the initial values will be overwritten by the solution vector.\n* The 'objective function' is the user-defined function to be minimized (or zeroed-out in the case of root finding methods).\n* The final input is optional: it is any object that contains additional parameters necessary to evaluate the objective function.\n\nFor example, the BFGS algorithm is called using\n```cpp\nbfgs(ColVec_t\u0026 init_out_vals, std::function\u003cdouble (const ColVec_t\u0026 vals_inp, ColVec_t* grad_out, void* opt_data)\u003e opt_objfn, void* opt_data);\n```\n\nwhere ``ColVec_t`` is used to represent, e.g., ``arma::vec`` or ``Eigen::VectorXd`` types.\n\n## Installation\n\nOptimLib is available as a compiled shared library, or as a header-only library, for Unix-alike systems only (e.g., popular Linux-based distros, as well as macOS). Use of this library with Windows-based systems, with or without MSVC, **is not supported**.\n\n### Requirements\n\nOptimLib requires either the Armadillo or Eigen C++ linear algebra libraries. 
(Note that Eigen version 3.4.0 requires a C++14-compatible compiler.)\n\nBefore including the header files, define **one** of the following:\n``` cpp\n#define OPTIM_ENABLE_ARMA_WRAPPERS\n#define OPTIM_ENABLE_EIGEN_WRAPPERS\n```\n\nExample:\n``` cpp\n#define OPTIM_ENABLE_EIGEN_WRAPPERS\n#include \"optim.hpp\"\n```\n\n### Installation Method 1: Shared Library\n\nThe library can be installed on Unix-alike systems via the standard `./configure \u0026\u0026 make` method.\n\nFirst clone the library and any necessary submodules:\n\n``` bash\n# clone optim into the current directory\ngit clone https://github.com/kthohr/optim ./optim\n\n# change directory\ncd ./optim\n\n# clone necessary submodules\ngit submodule update --init\n```\n\nSet **one** of the following environment variables *before* running `configure`:\n\n``` bash\nexport ARMA_INCLUDE_PATH=/path/to/armadillo\nexport EIGEN_INCLUDE_PATH=/path/to/eigen\n```\n\nFinally:\n\n``` bash\n# build and install with Eigen\n./configure -i \"/usr/local\" -l eigen -p\nmake\nmake install\n```\n\nThe final command will install OptimLib into `/usr/local`.\n\nConfiguration options (see `./configure -h`):\n\n\u0026nbsp; \u0026nbsp; \u0026nbsp; **Primary**\n* `-h` print help\n* `-i` installation path; default: the build directory\n* `-f` floating-point precision mode; default: `double`\n* `-l` specify the choice of linear algebra library; choose `arma` or `eigen`\n* `-m` specify the BLAS and Lapack libraries to link with; for example, `-m \"-lopenblas\"` or `-m \"-framework Accelerate\"`\n* `-o` compiler optimization options; defaults to `-O3 -march=native -ffp-contract=fast -flto -DARMA_NO_DEBUG`\n* `-p` enable OpenMP parallelization features (*recommended*)\n\n\u0026nbsp; \u0026nbsp; \u0026nbsp; **Secondary**\n* `-c` a coverage build (used with Codecov)\n* `-d` a 'development' build\n* `-g` a debugging build (optimization flags set to `-O0 -g`)\n\n\u0026nbsp; \u0026nbsp; \u0026nbsp; **Special**\n* `--header-only-version` generate a 
header-only version of OptimLib (see [below](#installation-method-2-header-only-library))\n\u003c!-- * `-R` RcppArmadillo compatible build by setting the appropriate R library directories (R, Rcpp, and RcppArmadillo) --\u003e\n\n## Installation Method 2: Header-only Library\n\nOptimLib is also available as a header-only library (i.e., without the need to compile a shared library). Simply run `configure` with the `--header-only-version` option:\n\n```bash\n./configure --header-only-version\n```\n\nThis will create a new directory, `header_only_version`, containing a copy of OptimLib, modified to work on an inline basis. With this header-only version, simply include the header files (`#include \"optim.hpp\"`) and set the include path to the `header_only_version` directory (e.g., `-I/path/to/optimlib/header_only_version`).\n\n## R Compatibility\n\nTo use OptimLib with an R package, first generate a header-only version of the library (see [above](#installation-method-2-header-only-library)). Then simply add a compiler definition before including the OptimLib files.\n\n* For RcppArmadillo:\n```cpp\n#define OPTIM_USE_RCPP_ARMADILLO\n#include \"optim.hpp\"\n```\n\n* For RcppEigen:\n```cpp\n#define OPTIM_USE_RCPP_EIGEN\n#include \"optim.hpp\"\n```\n\n## Examples\n\nTo illustrate OptimLib at work, consider searching for the global minimum of the [Ackley function](https://en.wikipedia.org/wiki/Ackley_function):\n\n![Ackley](https://github.com/kthohr/kthohr.github.io/blob/master/pics/ackley_fn_3d.png)\n\nThis is a well-known test function with many local minima. Newton-type methods (such as BFGS) are sensitive to the choice of initial values, and will perform rather poorly here. 
As such, we will employ a global search method--in this case: Differential Evolution.\n\nCode:\n\n``` cpp\n#define OPTIM_ENABLE_EIGEN_WRAPPERS\n#include \"optim.hpp\"\n        \n#define OPTIM_PI 3.14159265358979\n\ndouble \nackley_fn(const Eigen::VectorXd\u0026 vals_inp, Eigen::VectorXd* grad_out, void* opt_data)\n{\n    const double x = vals_inp(0);\n    const double y = vals_inp(1);\n\n    const double obj_val = 20 + std::exp(1) - 20*std::exp( -0.2*std::sqrt(0.5*(x*x + y*y)) ) - std::exp( 0.5*(std::cos(2 * OPTIM_PI * x) + std::cos(2 * OPTIM_PI * y)) );\n            \n    return obj_val;\n}\n        \nint main()\n{\n    Eigen::VectorXd x = 2.0 * Eigen::VectorXd::Ones(2); // initial values: (2,2)\n        \n    bool success = optim::de(x, ackley_fn, nullptr);\n        \n    if (success) {\n        std::cout \u003c\u003c \"de: Ackley test completed successfully.\" \u003c\u003c std::endl;\n    } else {\n        std::cout \u003c\u003c \"de: Ackley test completed unsuccessfully.\" \u003c\u003c std::endl;\n    }\n        \n    std::cout \u003c\u003c \"de: solution to Ackley test:\\n\" \u003c\u003c x \u003c\u003c std::endl;\n        \n    return 0;\n}\n```\n\nOn x86-based computers, this example can be compiled using:\n\n``` bash\ng++ -Wall -std=c++14 -O3 -march=native -ffp-contract=fast -I/path/to/eigen -I/path/to/optim/include optim_de_ex.cpp -o optim_de_ex.out -L/path/to/optim/lib -loptim\n```\n\nOutput:\n\n```\nde: Ackley test completed successfully.\nelapsed time: 0.028167s\n\nde: solution to Ackley test:\n  -1.2702e-17\n  -3.8432e-16\n```\n\nOn a standard laptop, OptimLib will compute a solution to within machine precision in a fraction of a second.\n\nThe Armadillo-based version of this example:\n\n``` cpp\n#define OPTIM_ENABLE_ARMA_WRAPPERS\n#include \"optim.hpp\"\n        \n#define OPTIM_PI 3.14159265358979\n\ndouble \nackley_fn(const arma::vec\u0026 vals_inp, arma::vec* grad_out, void* opt_data)\n{\n    const double x = vals_inp(0);\n    const double y = 
vals_inp(1);\n\n    const double obj_val = 20 + std::exp(1) - 20*std::exp( -0.2*std::sqrt(0.5*(x*x + y*y)) ) - std::exp( 0.5*(std::cos(2 * OPTIM_PI * x) + std::cos(2 * OPTIM_PI * y)) );\n            \n    return obj_val;\n}\n        \nint main()\n{\n    arma::vec x = arma::ones(2,1) + 1.0; // initial values: (2,2)\n        \n    bool success = optim::de(x, ackley_fn, nullptr);\n        \n    if (success) {\n        std::cout \u003c\u003c \"de: Ackley test completed successfully.\" \u003c\u003c std::endl;\n    } else {\n        std::cout \u003c\u003c \"de: Ackley test completed unsuccessfully.\" \u003c\u003c std::endl;\n    }\n        \n    arma::cout \u003c\u003c \"de: solution to Ackley test:\\n\" \u003c\u003c x \u003c\u003c arma::endl;\n        \n    return 0;\n}\n```\n\nCompile and run:\n\n``` bash\ng++ -Wall -std=c++11 -O3 -march=native -ffp-contract=fast -I/path/to/armadillo -I/path/to/optim/include optim_de_ex.cpp -o optim_de_ex.out -L/path/to/optim/lib -loptim\n./optim_de_ex.out\n```\n\nCheck the `/tests` directory for additional examples, and https://optimlib.readthedocs.io/en/latest/ for a detailed description of each algorithm.\n\n### Logistic regression\n\nFor a data-based example, consider maximum likelihood estimation of a logit model, common in statistics and machine learning. In this case we have closed-form expressions for the gradient and hessian. 
We will employ a popular gradient descent method, Adam (Adaptive Moment Estimation), and compare to a pure Newton-based algorithm.\n\n``` cpp\n#define OPTIM_ENABLE_ARMA_WRAPPERS\n#include \"optim.hpp\"\n\n// sigmoid function\n\ninline\narma::mat sigm(const arma::mat\u0026 X)\n{\n    return 1.0 / (1.0 + arma::exp(-X));\n}\n\n// log-likelihood function data\n\nstruct ll_data_t\n{\n    arma::vec Y;\n    arma::mat X;\n};\n\n// log-likelihood function with hessian\n\ndouble ll_fn_whess(const arma::vec\u0026 vals_inp, arma::vec* grad_out, arma::mat* hess_out, void* opt_data)\n{\n    ll_data_t* objfn_data = reinterpret_cast\u003cll_data_t*\u003e(opt_data);\n\n    arma::vec Y = objfn_data-\u003eY;\n    arma::mat X = objfn_data-\u003eX;\n\n    arma::vec mu = sigm(X*vals_inp);\n\n    const double norm_term = static_cast\u003cdouble\u003e(Y.n_elem);\n\n    const double obj_val = - arma::accu( Y%arma::log(mu) + (1.0-Y)%arma::log(1.0-mu) ) / norm_term;\n\n    //\n\n    if (grad_out)\n    {\n        *grad_out = X.t() * (mu - Y) / norm_term;\n    }\n\n    //\n\n    if (hess_out)\n    {\n        arma::mat S = arma::diagmat( mu%(1.0-mu) );\n        *hess_out = X.t() * S * X / norm_term;\n    }\n\n    //\n\n    return obj_val;\n}\n\n// log-likelihood function for Adam\n\ndouble ll_fn(const arma::vec\u0026 vals_inp, arma::vec* grad_out, void* opt_data)\n{\n    return ll_fn_whess(vals_inp,grad_out,nullptr,opt_data);\n}\n\n//\n\nint main()\n{\n    int n_dim = 5;     // dimension of parameter vector\n    int n_samp = 4000; // sample length\n\n    arma::mat X = arma::randn(n_samp,n_dim);\n    arma::vec theta_0 = 1.0 + 3.0*arma::randu(n_dim,1);\n\n    arma::vec mu = sigm(X*theta_0);\n\n    arma::vec Y(n_samp);\n\n    for (int i=0; i \u003c n_samp; i++)\n    {\n        Y(i) = ( arma::as_scalar(arma::randu(1)) \u003c mu(i) ) ? 
1.0 : 0.0;\n    }\n\n    // fn data and initial values\n\n    ll_data_t opt_data;\n    opt_data.Y = std::move(Y);\n    opt_data.X = std::move(X);\n\n    arma::vec x = arma::ones(n_dim,1) + 1.0; // initial values\n\n    // run Adam-based optim\n\n    optim::algo_settings_t settings;\n\n    settings.gd_method = 6;\n    settings.gd_settings.step_size = 0.1;\n\n    std::chrono::time_point\u003cstd::chrono::system_clock\u003e start = std::chrono::system_clock::now();\n\n    bool success = optim::gd(x,ll_fn,\u0026opt_data,settings);\n\n    std::chrono::time_point\u003cstd::chrono::system_clock\u003e end = std::chrono::system_clock::now();\n    std::chrono::duration\u003cdouble\u003e elapsed_seconds = end-start;\n\n    //\n\n    if (success) {\n        std::cout \u003c\u003c \"Adam: logit_reg test completed successfully.\\n\"\n                  \u003c\u003c \"elapsed time: \" \u003c\u003c elapsed_seconds.count() \u003c\u003c \"s\\n\";\n    } else {\n        std::cout \u003c\u003c \"Adam: logit_reg test completed unsuccessfully.\" \u003c\u003c std::endl;\n    }\n\n    arma::cout \u003c\u003c \"\\nAdam: true values vs estimates:\\n\" \u003c\u003c arma::join_rows(theta_0,x) \u003c\u003c arma::endl;\n\n    //\n    // run Newton-based optim\n\n    x = arma::ones(n_dim,1) + 1.0; // initial values\n\n    start = std::chrono::system_clock::now();\n\n    success = optim::newton(x,ll_fn_whess,\u0026opt_data);\n\n    end = std::chrono::system_clock::now();\n    elapsed_seconds = end-start;\n\n    //\n\n    if (success) {\n        std::cout \u003c\u003c \"newton: logit_reg test completed successfully.\\n\"\n                  \u003c\u003c \"elapsed time: \" \u003c\u003c elapsed_seconds.count() \u003c\u003c \"s\\n\";\n    } else {\n        std::cout \u003c\u003c \"newton: logit_reg test completed unsuccessfully.\" \u003c\u003c std::endl;\n    }\n\n    arma::cout \u003c\u003c \"\\nnewton: true values vs estimates:\\n\" \u003c\u003c arma::join_rows(theta_0,x) \u003c\u003c arma::endl;\n\n 
   return 0;\n}\n```\nOutput:\n```\nAdam: logit_reg test completed successfully.\nelapsed time: 0.025128s\n\nAdam: true values vs estimates:\n   2.7850   2.6993\n   3.6561   3.6798\n   2.3379   2.3860\n   2.3167   2.4313\n   2.2465   2.3064\n\nnewton: logit_reg test completed successfully.\nelapsed time: 0.255909s\n\nnewton: true values vs estimates:\n   2.7850   2.6993\n   3.6561   3.6798\n   2.3379   2.3860\n   2.3167   2.4313\n   2.2465   2.3064\n```\n\n## Automatic Differentiation\n\nBy combining Eigen with the [Autodiff library](https://autodiff.github.io), OptimLib provides experimental support for automatic differentiation. \n\nExample using forward-mode automatic differentiation with BFGS for the Sphere function:\n\n``` cpp\n#define OPTIM_ENABLE_EIGEN_WRAPPERS\n#include \"optim.hpp\"\n\n#include \u003cautodiff/forward/real.hpp\u003e\n#include \u003cautodiff/forward/real/eigen.hpp\u003e\n\n//\n\nautodiff::real\nopt_fnd(const autodiff::ArrayXreal\u0026 x)\n{\n    return x.cwiseProduct(x).sum();\n}\n\ndouble\nopt_fn(const Eigen::VectorXd\u0026 x, Eigen::VectorXd* grad_out, void* opt_data)\n{\n    autodiff::real u;\n    autodiff::ArrayXreal xd = x.eval();\n\n    if (grad_out) {\n        Eigen::VectorXd grad_tmp = autodiff::gradient(opt_fnd, autodiff::wrt(xd), autodiff::at(xd), u);\n\n        *grad_out = grad_tmp;\n    } else {\n        u = opt_fnd(xd);\n    }\n\n    return u.val();\n}\n\nint main()\n{\n    Eigen::VectorXd x(5);\n    x \u003c\u003c 1, 2, 3, 4, 5;\n\n    bool success = optim::bfgs(x, opt_fn, nullptr);\n\n    if (success) {\n        std::cout \u003c\u003c \"bfgs: forward-mode autodiff test completed successfully.\\n\" \u003c\u003c std::endl;\n    } else {\n        std::cout \u003c\u003c \"bfgs: forward-mode autodiff test completed unsuccessfully.\\n\" \u003c\u003c std::endl;\n    }\n\n    std::cout \u003c\u003c \"solution: x = \\n\" \u003c\u003c x \u003c\u003c std::endl;\n\n    return 0;\n}\n```\n\nCompile with:\n\n``` bash\ng++ -Wall -std=c++17 
-O3 -march=native -ffp-contract=fast -I/path/to/eigen -I/path/to/autodiff -I/path/to/optim/include optim_autodiff_ex.cpp -o optim_autodiff_ex.out -L/path/to/optim/lib -loptim\n```\n\nSee the [documentation](https://optimlib.readthedocs.io/en/latest/autodiff.html) for more details on this topic.\n\n## Author\n\nKeith O'Hara\n\n## License\n\nApache Version 2\n","funding_links":[],"categories":["8. Tutorials"],"sub_categories":["8.4 Optimization Techniques"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fkthohr%2Foptim","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fkthohr%2Foptim","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fkthohr%2Foptim/lists"}