{"id":22301695,"url":"https://github.com/jhorzek/lesssem","last_synced_at":"2025-07-29T02:32:45.470Z","repository":{"id":151204784,"uuid":"478142044","full_name":"jhorzek/lessSEM","owner":"jhorzek","description":"lessSEM estimates sparse structural equation models.","archived":false,"fork":false,"pushed_at":"2024-07-21T20:30:56.000Z","size":15348,"stargazers_count":7,"open_issues_count":8,"forks_count":1,"subscribers_count":3,"default_branch":"main","last_synced_at":"2024-11-28T16:00:19.001Z","etag":null,"topics":["lasso","psychometrics","r","regularization","regularized-structural-equation-model","sem","structural-equation-modeling"],"latest_commit_sha":null,"homepage":"https://jhorzek.github.io/lessSEM/","language":"R","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/jhorzek.png","metadata":{"files":{"readme":"README.Rmd","changelog":"NEWS.md","contributing":null,"funding":null,"license":"LICENSE.md","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-04-05T13:30:25.000Z","updated_at":"2024-07-19T00:10:30.000Z","dependencies_parsed_at":"2023-11-22T07:30:00.131Z","dependency_job_id":"c0fc1397-45df-40f6-b706-3e0593cfe0ff","html_url":"https://github.com/jhorzek/lessSEM","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jhorzek%2FlessSEM","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jhorzek%2FlessSEM/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jhorzek%2FlessSEM/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jhorzek%2FlessSE
M/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/jhorzek","download_url":"https://codeload.github.com/jhorzek/lessSEM/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":227973511,"owners_count":17849727,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["lasso","psychometrics","r","regularization","regularized-structural-equation-model","sem","structural-equation-modeling"],"created_at":"2024-12-03T18:23:54.311Z","updated_at":"2024-12-03T18:23:55.203Z","avatar_url":"https://github.com/jhorzek.png","language":"R","readme":"---\noutput: github_document\n---\n\n\u003c!-- README.md is generated from README.Rmd. Please edit that file --\u003e\n\n```{r, include = FALSE}\nknitr::opts_chunk$set(\n  collapse = TRUE,\n  comment = \"#\u003e\",\n  fig.path = \"man/figures/README-\",\n  out.width = \"100%\"\n)\nset.seed(123)\n```\n\n# lessSEM\n\n\u003c!-- badges: start --\u003e\n[![Total Downloads](https://cranlogs.r-pkg.org/badges/grand-total/lessSEM)](https://cranlogs.r-pkg.org/badges/grand-total/lessSEM)\n\u003c!-- badges: end --\u003e\n\n**lessSEM** (**l**essSEM **es**timates **s**parse **SEM**) is an R package for \nregularized structural equation modeling (regularized SEM) with non-smooth \npenalty functions (e.g., lasso) building on [**lavaan**](https://github.com/yrosseel/lavaan). 
\n**lessSEM** is heavily inspired by the [**regsem**](https://github.com/Rjacobucci/regsem) package \nand the [**lslx**](https://github.com/psyphh/lslx) package, both of which offer similar functionality.\n**If you use lessSEM, please also cite [regsem](https://github.com/Rjacobucci/regsem) \nand [lslx](https://github.com/psyphh/lslx)!**\n\nThe objectives of **lessSEM** are to provide ...\n\n1. a flexible framework for regularizing SEM.\n2. optimizers for other packages that can handle non-differentiable penalty functions.\n\nThe following penalty functions are currently implemented in **lessSEM**:\n\n![](man/figures/penaltyFunctions.png)\n\nThe column \"penalty\" refers to the name of the function call in the **lessSEM** package \n(e.g., lasso is called with the `lasso()` function). \n\nThe best model can be selected with the \nAIC or BIC. If you want to use cross-validation, use `cvLasso`, `cvAdaptiveLasso`,\netc. instead (see, e.g., `?lessSEM::cvLasso`).\n\n## [**regsem**](https://github.com/Rjacobucci/regsem), [**lslx**](https://github.com/psyphh/lslx), and **lessSEM**\n\nThe packages [**regsem**](https://github.com/Rjacobucci/regsem), \n[**lslx**](https://github.com/psyphh/lslx), and **lessSEM** can all be used to\nregularize basic SEM. In fact, as outlined above, **lessSEM** is heavily inspired\nby [**regsem**](https://github.com/Rjacobucci/regsem)\nand [**lslx**](https://github.com/psyphh/lslx). However, the packages differ in their aims: the objective\nof **lessSEM** is not to replace the more mature packages [**regsem**](https://github.com/Rjacobucci/regsem)\nand [**lslx**](https://github.com/psyphh/lslx). 
Instead, our objective is to\nprovide method developers with a flexible framework for regularized SEM.\nThe following shows an incomplete comparison of some features implemented in\nthe three packages:\n\n\n| | **regsem** | **lslx** | **lessSEM** |\n| --- | --- | --- | --- |\n| Model specification | based on lavaan | similar to lavaan | based on lavaan |\n| Maximum likelihood estimation | Yes | Yes | Yes |\n| Least squares estimation | No | Yes | Dev. |\n| Categorical variables | No | Yes | No |\n| Confidence Intervals | No | Yes | No |\n| Missing Data | FIML | Auxiliary Variables | FIML |\n| Multi-group models | No | Yes | Yes |\n| Stability selection | Yes | No | Dev. |\n| Mixed penalties | No | No | Yes |\n| Equality constraints | Yes | No | Yes |\n| Parameter transformations | diff_lasso | No | Yes |\n| Definition variables | No | No | Yes |\n\n\u003e **Warning**\n\u003e Dev. refers to features that are supported, but still under development and\nmay have bugs. Use with caution!\n\n# Installation\n\nIf you want to install **lessSEM** from CRAN, use the following commands in R:\n\n```{r, eval = FALSE}\ninstall.packages(\"lessSEM\")\n```\n\nThe newest version of the package can be installed from GitHub. However, because the\nproject uses submodules, `devtools::install_github` does not download the entire \nR package. To download the development version, the following command needs to\nbe run in git: \n\n```\ngit clone --branch development --recurse-submodules https://github.com/jhorzek/lessSEM.git\n```\nNavigate to the folder to which git copied the project and install the package\nwith the lessSEM.Rproj file.\n\nIf you want to download the main branch, use\n\n```\ngit clone --recurse-submodules https://github.com/jhorzek/lessSEM.git\n```\n\n\u003e **Note**\n\u003e The lessSEM project has multiple branches. The **main**\nbranch will match the version currently available from CRAN. The **development** branch\nwill have newer features not yet available from CRAN. 
This branch will have passed all\ncurrent tests of our test suite, but may not be ready for CRAN yet (e.g., because\nnot all objectives of the road map have been met). **gh-pages** is used to create \nthe [documentation website](https://jhorzek.github.io/lessSEM/). Finally, all other branches\nare used for ongoing development and should be considered unstable.\n\n# Introduction\n\nPlease visit the [lessSEM website](https://jhorzek.github.io/lessSEM/) for \nthe latest documentation. You will also find a short introduction to regularized \nSEM in `vignette('lessSEM', package = 'lessSEM')` and the documentation of the \nindividual functions (e.g., see `?lessSEM::scad`).\nFinally, you will find templates for a selection of models that can be used with **lessSEM**\n(e.g., the cross-lagged panel model) in the package [**lessTemplates**](https://github.com/jhorzek/lessTemplates).\n\n# Example\n\n```{r, results = 'hide', fig.show='hide', message=FALSE, warning=FALSE}\nlibrary(lessSEM)\nlibrary(lavaan)\n\n# Like regsem, lessSEM builds on the lavaan\n# package for model specification. 
The first step\n# therefore is to implement the model in lavaan.\n\ndataset \u003c- simulateExampleData()\n\nlavaanSyntax \u003c- \"\n      f =~ l1*y1 + l2*y2 + l3*y3 + l4*y4 + l5*y5 + \n           l6*y6 + l7*y7 + l8*y8 + l9*y9 + l10*y10 + \n           l11*y11 + l12*y12 + l13*y13 + l14*y14 + l15*y15\n      f ~~ 1*f\n      \"\n\nlavaanModel \u003c- lavaan::sem(lavaanSyntax,\n                           data = dataset,\n                           meanstructure = TRUE,\n                           std.lv = TRUE)\n\n# Optional: Plot the model\n# if(!require(\"semPlot\")) install.packages(\"semPlot\")\n# semPlot::semPaths(lavaanModel, \n#                   what = \"est\",\n#                   fade = FALSE)\n\nlsem \u003c- lasso(\n  # pass the fitted lavaan model\n  lavaanModel = lavaanModel,\n  # names of the regularized parameters:\n  regularized = c(\"l6\", \"l7\", \"l8\", \"l9\", \"l10\",\n                  \"l11\", \"l12\", \"l13\", \"l14\", \"l15\"),\n  # in case of lasso and adaptive lasso, we can specify the number of lambda\n  # values to use. lessSEM will automatically find lambda_max and fit\n  # models for nLambda values between 0 and lambda_max. 
For the other\n  # penalty functions, lambdas must be specified explicitly\n  nLambdas = 50)\n\n# use the plot-function to plot the regularized parameters:\nplot(lsem)\n\n# use the coef-function to show the estimates\ncoef(lsem)\n\n# the best parameters can be extracted with:\ncoef(lsem, criterion = \"AIC\")\ncoef(lsem, criterion = \"BIC\")\n\n# if you just want the estimates, use estimates():\nestimates(lsem, criterion = \"AIC\")\n\n# elements of lsem can be accessed with the @ operator:\nlsem@parameters[1,]\n\n# AIC and BIC for all tuning parameter configurations:\nAIC(lsem)\nBIC(lsem)\n\n# cross-validation\ncv \u003c- cvLasso(lavaanModel = lavaanModel,\n              regularized = c(\"l6\", \"l7\", \"l8\", \"l9\", \"l10\",\n                              \"l11\", \"l12\", \"l13\", \"l14\", \"l15\"),\n              lambdas = seq(0,1,.1),\n              standardize = TRUE)\n\n# get best model according to cross-validation:\ncoef(cv)\n\n#### Advanced ####\n# Switching the optimizer:\n# Use the \"method\" argument to switch the optimizer. 
The control argument\n# must also be changed to the corresponding function:\nlsemIsta \u003c- lasso(\n  lavaanModel = lavaanModel,\n  regularized = paste0(\"l\", 6:15),\n  nLambdas = 50,\n  method = \"ista\",\n  control = controlIsta(\n    # Here, we can also specify that we want to use multiple cores:\n    nCores = 2))\n\n# Note: The results are basically identical:\nlsemIsta@parameters - lsem@parameters\n```\n\nIf you want to regularize all loadings, regressions, variances, or covariances, you\ncan also use one of the helper functions to extract the respective parameter labels\nfrom **lavaan** and then pass these to **lessSEM**:\n\n```{r}\nloadings(lavaanModel)\nregressions(lavaanModel)\nvariances(lavaanModel)\ncovariances(lavaanModel)\n```\n\n\n\n# Transformations\n\n**lessSEM** allows for parameter transformations that could, for instance, be used to test\nmeasurement invariance in longitudinal models (e.g., Liang, 2018; Bauer et al., 2020).\nA thorough introduction is provided in ``vignette('Parameter-transformations', package = 'lessSEM')``. 
\nAs an example, we will test measurement invariance in the `PoliticalDemocracy`\ndata set.\n\n```{r, results = 'hide', fig.show='hide', message=FALSE}\nlibrary(lessSEM)\nlibrary(lavaan)\n# we will use the PoliticalDemocracy from lavaan (see ?lavaan::sem)\nmodel \u003c- ' \n  # latent variable definitions\n     ind60 =~ x1 + x2 + x3\n     # assuming different loadings for different time points:\n     dem60 =~ y1 + a1*y2 + b1*y3 + c1*y4\n     dem65 =~ y5 + a2*y6 + b2*y7 + c2*y8\n\n  # regressions\n    dem60 ~ ind60\n    dem65 ~ ind60 + dem60\n\n  # residual correlations\n    y1 ~~ y5\n    y2 ~~ y4 + y6\n    y3 ~~ y7\n    y4 ~~ y8\n    y6 ~~ y8\n'\n\nfit \u003c- sem(model, data = PoliticalDemocracy)\n\n# We will define a transformation which regularizes differences\n# between loadings over time:\n\ntransformations \u003c- \"\n// which parameters do we want to use?\nparameters: a1, a2, b1, b2, c1, c2, delta_a2, delta_b2, delta_c2\n\n// transformations:\na2 = a1 + delta_a2;\nb2 = b1 + delta_b2;\nc2 = c1 + delta_c2;\n\"\n\n# setting delta_a2, delta_b2, or delta_c2 to zero implies measurement invariance\n# for the respective parameters (a1, b1, c1)\nlassoFit \u003c- lasso(lavaanModel = fit, \n                  # we want to regularize the differences between the parameters\n                  regularized = c(\"delta_a2\", \"delta_b2\", \"delta_c2\"),\n                  nLambdas = 100,\n                  # Our model modification must make use of the modifyModel - function:\n                  modifyModel = modifyModel(transformations = transformations)\n)\n```\n\nFinally, we can extract the best parameters:\n\n```{r, results = 'hide', message=FALSE}\ncoef(lassoFit, criterion = \"BIC\")\n```\n\nAs all differences (`delta_a2`, `delta_b2`, and `delta_c2`) have been zeroed, we can \nassume measurement invariance.\n\n# Experimental Features\n\nThe following features are relatively new and you may still experience some bugs.\nPlease be aware of that when using these 
features.\n\n## From **lessSEM** to **lavaan**\n\n**lessSEM** supports exporting specific models to **lavaan**. This can be very useful when plotting the \nfinal model. \n\n```{r, results = 'hide', message=FALSE}\nlavaanModel \u003c- lessSEM2Lavaan(regularizedSEM = lsem, \n                              criterion = \"BIC\")\n```\n\nThe result can be plotted with, for instance, [**semPlot**](https://github.com/SachaEpskamp/semPlot):\n```{r, results = 'hide', fig.show='hide', message=FALSE}\nlibrary(semPlot)\nsemPaths(lavaanModel,\n         what = \"est\",\n         fade = FALSE)\n```\n\n## Multi-Group Models and Definition Variables\n\n**lessSEM** supports multi-group SEM and, to some degree, definition variables.\nRegularized multi-group SEMs have been proposed by Huang (2018) and are \nimplemented in **lslx** (Huang, 2020). Here, differences between groups are regularized.\nA detailed introduction can be found in \n`vignette(topic = \"Definition-Variables-and-Multi-Group-SEM\", package = \"lessSEM\")`.\nTherein, it is also explained how multi-group SEMs can be used to implement\ndefinition variables (e.g., for latent growth curve models).\n\n## Mixed Penalties\n\n**lessSEM** allows for defining different penalties for different parts\nof the model. This feature is new and very experimental. Please keep that\nin mind when using the procedure. A detailed introduction\ncan be found in `vignette(topic = \"Mixed-Penalties\", package = \"lessSEM\")`.\n\nTo provide a short example, we will regularize the loadings and the regression\nparameters of the Political Democracy data set with different penalties. 
The \nfollowing script is adapted from `?lavaan::sem`.\n\n```{r, results = 'hide', message=FALSE}\nmodel \u003c- ' \n  # latent variable definitions\n     ind60 =~ x1 + x2 + x3 + c2*y2 + c3*y3 + c4*y4\n     dem60 =~ y1 + y2 + y3 + y4\n     dem65 =~ y5 + y6 + y7 + c*y8\n\n  # regressions\n    dem60 ~ r1*ind60\n    dem65 ~ r2*ind60 + r3*dem60\n'\n\nlavaanModel \u003c- sem(model,\n                   data = PoliticalDemocracy)\n\n# Let's add a lasso penalty on the cross-loadings c2 - c4 and \n# scad penalty on the regressions r1-r3\nfitMp \u003c- lavaanModel |\u003e\n  mixedPenalty() |\u003e\n  addLasso(regularized = c(\"c2\", \"c3\", \"c4\"), \n           lambdas = seq(0,1,.1)) |\u003e\n  addScad(regularized = c(\"r1\", \"r2\", \"r3\"), \n          lambdas = seq(0,1,.2),\n          thetas = 3.7) |\u003e\n  fit()\n```\n\nThe best model according to the BIC can be extracted with:\n\n```{r, results = 'hide', message=FALSE}\ncoef(fitMp, criterion = \"BIC\")\n```\n\n# Optimizers\n\nCurrently, **lessSEM** has the following optimizers:\n\n- (variants of) iterative shrinkage and thresholding (e.g., Beck \u0026 Teboulle, 2009; \nGong et al., 2013; Parikh \u0026 Boyd, 2013); optimization of cappedL1, lsp, scad, \nand mcp is based on Gong et al. (2013)\n- glmnet (Friedman et al., 2010; Yuan et al., 2012; Huang, 2020)\n\nThese optimizers are implemented based on the \n[**regCtsem**](https://github.com/jhorzek/regCtsem) package. Most importantly, \n**all optimizers in lessSEM are available for other packages.** \nThere are four ways to implement them which are documented in \n`vignette(\"General-Purpose-Optimization\", package = \"lessSEM\")`.\nIn short, these are:\n\n1. using the R interface: All general purpose implementations of the functions \nare called with prefix \"gp\" (`gpLasso`, `gpScad`, ...). More information and \nexamples can be found in the documentation of these functions (e.g., `?lessSEM::gpLasso`, \n`?lessSEM::gpAdaptiveLasso`, `?lessSEM::gpElasticNet`). 
The interface is similar to \nthe optim optimizers in R. \n2. using Rcpp, we can pass C++ function pointers to the general purpose optimizers\n`gpLassoCpp`, `gpScadCpp`, ... (e.g., `?lessSEM::gpLassoCpp`).\n3. All optimizers are implemented as C++ header-only files in **lessSEM**. Thus, \nthey can be accessed from other packages using C++. The interface is similar \nto that of the [**ensmallen**](https://ensmallen.org/) library. We have implemented\na simple example for elastic net regularization of linear regressions in the \n[**lessLM**](https://github.com/jhorzek/lessLM) package. You can also find more \ndetails on the general design of the optimizer interface in `vignette(\"The-optimizer-interface\", package = \"lessSEM\")`.\n4. The optimizers are implemented in the separate C++ header-only library\n[lesstimate](https://jhorzek.github.io/lesstimate/) that can be used as a submodule\nin R packages.\n\n# References\n\n## R - Packages / Software\n\n* [lavaan](https://github.com/yrosseel/lavaan): Rosseel, Y. (2012). lavaan: An R Package for Structural Equation Modeling. Journal of Statistical Software, 48(2), 1-36. https://doi.org/10.18637/jss.v048.i02\n* [regsem](https://github.com/Rjacobucci/regsem): Jacobucci, R. (2017). regsem: \nRegularized Structural Equation Modeling. ArXiv:1703.08489 [Stat]. https://arxiv.org/abs/1703.08489\n* [lslx](https://github.com/psyphh/lslx): Huang, P.-H. (2020). lslx: \nSemi-confirmatory structural equation modeling via penalized likelihood. Journal \nof Statistical Software, 93(7). https://doi.org/10.18637/jss.v093.i07\n* [fasta](https://CRAN.R-project.org/package=fasta): \nAnother implementation of the FISTA algorithm (Beck \u0026 Teboulle, 2009).\n* [ensmallen](https://ensmallen.org/): Curtin, R. R., Edel, M., Prabhu, R. G., \nBasak, S., Lou, Z., \u0026 Sanderson, C. (2021). The ensmallen library for flexible \nnumerical optimization. 
Journal of Machine Learning Research, 22, 1–6.\n* [regCtsem](https://github.com/jhorzek/regCtsem): Orzek, J. H., \u0026 Voelkle, M. C. (in press). \nRegularized continuous time structural equation models: A network perspective. Psychological Methods.\n\n\n## Regularized Structural Equation Modeling\n\n* Huang, P.-H., Chen, H., \u0026 Weng, L.-J. (2017). A Penalized Likelihood Method \nfor Structural Equation Modeling. Psychometrika, 82(2), 329–354. https://doi.org/10.1007/s11336-017-9566-9\n* Huang, P.-H. (2018). A penalized likelihood method for multi-group structural equation modelling. British Journal of Mathematical and Statistical Psychology, 71(3), 499–522. https://doi.org/10.1111/bmsp.12130\n* Jacobucci, R., Grimm, K. J., \u0026 McArdle, J. J. (2016). Regularized Structural \nEquation Modeling. Structural Equation Modeling: A Multidisciplinary Journal, 23(4), \n555–566. https://doi.org/10.1080/10705511.2016.1154793\n\n## Penalty Functions\n\n* Candès, E. J., Wakin, M. B., \u0026 Boyd, S. P. (2008). Enhancing Sparsity by \nReweighted l1 Minimization. Journal of Fourier Analysis and Applications, 14(5–6), \n877–905. https://doi.org/10.1007/s00041-008-9045-x\n* Fan, J., \u0026 Li, R. (2001). Variable selection via nonconcave penalized \nlikelihood and its oracle properties. Journal of the American Statistical \nAssociation, 96(456), 1348–1360. https://doi.org/10.1198/016214501753382273\n* Hoerl, A. E., \u0026 Kennard, R. W. (1970). Ridge Regression: Biased Estimation \nfor Nonorthogonal Problems. Technometrics, 12(1), 55–67. https://doi.org/10.1080/00401706.1970.10488634\n* Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. \nJournal of the Royal Statistical Society. Series B (Methodological), 58(1), 267–288.\n* Zhang, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty. \nThe Annals of Statistics, 38(2), 894–942. https://doi.org/10.1214/09-AOS729\n* Zhang, T. (2010). 
Analysis of Multi-stage Convex Relaxation for Sparse Regularization. \nJournal of Machine Learning Research, 11, 1081–1107.\n* Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the \nAmerican Statistical Association, 101(476), 1418–1429. https://doi.org/10.1198/016214506000000735\n* Zou, H., \u0026 Hastie, T. (2005). Regularization and variable selection via the \nelastic net. Journal of the Royal Statistical Society: Series B, 67(2), 301–320. \nhttps://doi.org/10.1111/j.1467-9868.2005.00503.x\n\n## Optimizer\n\n### GLMNET \n\n* Friedman, J., Hastie, T., \u0026 Tibshirani, R. (2010). Regularization paths for \ngeneralized linear models via coordinate descent. Journal of Statistical \nSoftware, 33(1), 1–20. https://doi.org/10.18637/jss.v033.i01\n* Yuan, G.-X., Ho, C.-H., \u0026 Lin, C.-J. (2012). An improved GLMNET for \nl1-regularized logistic regression. The Journal of Machine Learning Research, \n13, 1999–2030. https://doi.org/10.1145/2020408.2020421\n\n### Variants of ISTA\n\n* Beck, A., \u0026 Teboulle, M. (2009). A Fast Iterative Shrinkage-Thresholding \nAlgorithm for Linear Inverse Problems. SIAM Journal on Imaging Sciences, 2(1), \n183–202. https://doi.org/10.1137/080716542\n* Gong, P., Zhang, C., Lu, Z., Huang, J., \u0026 Ye, J. (2013). A general iterative \nshrinkage and thresholding algorithm for non-convex regularized optimization problems. \nProceedings of the 30th International Conference on Machine Learning, 28(2)(2), 37–45.\n* Parikh, N., \u0026 Boyd, S. (2013). Proximal Algorithms. Foundations and \nTrends in Optimization, 1(3), 123–231.\n\n## Miscellaneous\n\n* Liang, X., Yang, Y., \u0026 Huang, J. (2018). Evaluation of structural relationships in autoregressive cross-lagged models under longitudinal approximate invariance: A Bayesian analysis. Structural Equation Modeling: A Multidisciplinary Journal, 25(4), 558–572. https://doi.org/10.1080/10705511.2017.1410706\n* Bauer, D. J., Belzak, W. C. M., \u0026 Cole, V. T. (2020). 
Simplifying the Assessment of Measurement Invariance over Multiple Background Variables: Using Regularized Moderated Nonlinear Factor Analysis to Detect Differential Item Functioning. Structural Equation Modeling: A Multidisciplinary Journal, 27(1), 43–55. https://doi.org/10.1080/10705511.2019.1642754\n\n\n# LICENSE NOTE\n\nTHE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, \nINCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, \nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE \nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, \nWHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN \nCONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. \n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjhorzek%2Flesssem","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fjhorzek%2Flesssem","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjhorzek%2Flesssem/lists"}