{"id":18430168,"url":"https://github.com/friendly/genridge","last_synced_at":"2025-10-24T22:16:06.627Z","repository":{"id":56936548,"uuid":"105555707","full_name":"friendly/genridge","owner":"friendly","description":"Generalized Ridge Trace Plots for Ridge Regression","archived":false,"fork":false,"pushed_at":"2024-12-02T15:25:43.000Z","size":19078,"stargazers_count":4,"open_issues_count":0,"forks_count":1,"subscribers_count":2,"default_branch":"master","last_synced_at":"2025-07-14T13:39:51.680Z","etag":null,"topics":["bias-variance","graphics","principal-component-analysis","regression-models","ridge-regression","singular-value-decomposition"],"latest_commit_sha":null,"homepage":"http://friendly.github.io/genridge","language":"HTML","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/friendly.png","metadata":{"files":{"readme":"README.Rmd","changelog":"NEWS.md","contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2017-10-02T16:10:20.000Z","updated_at":"2024-12-02T15:26:28.000Z","dependencies_parsed_at":"2025-06-22T17:51:43.180Z","dependency_job_id":null,"html_url":"https://github.com/friendly/genridge","commit_stats":{"total_commits":78,"total_committers":3,"mean_commits":26.0,"dds":0.4358974358974359,"last_synced_commit":"1792af7d517ac40d04b6275295b995d3e675fc67"},"previous_names":[],"tags_count":5,"template":false,"template_full_name":null,"purl":"pkg:github/friendly/genridge","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/friendly%2Fgenridge","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/friendly%2Fgenridge/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/friendly%2Fgenridge/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/friendly%2Fgenridge/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/friendly","download_url":"https://codeload.github.com/friendly/genridge/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/friendly%2Fgenridge/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":266554160,"owners_count":23947277,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-07-22T02:00:09.085Z","response_time":66,"last_error":null,"robots_txt_status":null,"robots_txt_updated_at":null,"robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["bias-variance","graphics","principal-component-analysis","regression-models","ridge-regression","singular-value-decomposition"],"created_at":"2024-11-06T05:19:47.023Z","updated_at":"2025-10-24T22:16:06.620Z","avatar_url":"https://github.com/friendly.png","language":"HTML","readme":"---\noutput: 
---

<!-- README.md is generated from README.Rmd. Please edit that file -->

```{r setup, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  warning = FALSE,
  comment = "#>",
  fig.path = "man/figures/README-",
  fig.height = 5,
  fig.width = 5
#  out.width = "100%"
)

options(digits = 4)
library(genridge)
```

<!-- badges: start -->
[![DOI](https://zenodo.org/badge/105555707.svg)](https://zenodo.org/badge/latestdoi/105555707)
[![CRAN_Status_Badge](http://www.r-pkg.org/badges/version/genridge)](https://cran.r-project.org/package=genridge)
[![R-universe](https://friendly.r-universe.dev/badges/genridge)](https://friendly.r-universe.dev)
[![downloads](http://cranlogs.r-pkg.org/badges/grand-total/genridge)](https://cran.r-project.org/package=genridge)
[![pkgdown](https://img.shields.io/badge/pkgdown%20site-blue)](https://friendly.github.io/genridge)
<!-- badges: end -->

# genridge <img src="man/figures/logo.png" style="float:right; height:200px;" />

## Generalized Ridge Trace Plots for Ridge Regression

<!-- Version 0.7.1 -->
Version `r getNamespaceVersion("genridge")`

### What is ridge regression?

Consider the standard linear model,
$\mathbf{y} = \mathbf{X} \; \mathbf{\beta} + \mathbf{\epsilon}$,
for $p$ predictors in a multiple regression.
In this context, high multiple correlations among the predictors lead to the well-known problems of collinearity
under ordinary least squares (OLS) estimation, which result in unstable estimates of the
parameters in $\mathbf{\beta}$: standard errors are inflated, and estimated coefficients tend to be too large
in absolute value on average.

Ridge regression is one of a class of techniques designed to obtain more favorable
predictions, at the expense of some increase in bias, compared to OLS estimation.
An essential idea behind these methods is that the OLS estimates are constrained in
some way, shrinking them, on average, toward zero, in order to gain predictive accuracy.

The OLS estimates, which minimize the residual sum of squares $\mathrm{RSS} = \mathbf{\epsilon}^\top \mathbf{\epsilon}$, are given by:
$$
\widehat{\mathbf{\beta}}^{\mathrm{OLS}} = (\mathbf{X}^\top \mathbf{X})^{-1} \mathbf{X}^\top \mathbf{y} \; ,
$$
with $\widehat{\text{Var}} (\widehat{\mathbf{\beta}}^{\mathrm{OLS}}) = \widehat{\sigma}^2 (\mathbf{X}^\top \mathbf{X})^{-1}$.

Ridge regression replaces the standard residual sum of squares criterion with a penalized
form,

$$
\mathrm{RSS}(\lambda) = (\mathbf{y}-\mathbf{X} \mathbf{\beta})^\top  (\mathbf{y}-\mathbf{X} \mathbf{\beta}) + \lambda \mathbf{\beta}^\top \mathbf{\beta} \quad\quad (\lambda \ge 0) \: ,
$$

whose solution is easily seen to be:

$$
\widehat{\mathbf{\beta}}^{\mathrm{RR}}_\lambda  = (\mathbf{X}^\top \mathbf{X} + \lambda \mathbf{I})^{-1} \mathbf{X}^\top \mathbf{y} \; ,
$$

where $\lambda$ is the _shrinkage factor_ or _tuning constant_, penalizing larger coefficients.
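
To make these formulas concrete, both estimators can be computed in a few lines of base R. This is only an illustrative sketch, assuming standardized predictors and a centered response; it is not the package's own code, whose internal scaling may differ in detail.

```{r ridge-closed-form}
# Illustrative sketch: OLS and ridge estimates from the closed-form solutions,
# using the built-in longley data analyzed in the Examples below.
X <- scale(as.matrix(longley[, 1:6]))           # six standardized predictors
y <- longley$Employed - mean(longley$Employed)  # centered response
lam <- 0.04                                     # one choice of tuning constant

b_ols   <- drop(solve(crossprod(X), crossprod(X, y)))
b_ridge <- drop(solve(crossprod(X) + lam * diag(ncol(X)), crossprod(X, y)))

cbind(OLS = b_ols, ridge = b_ridge)  # ridge estimates are shrunk, on average, toward 0
```
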
Shrinkage can also be expressed as the equivalent degrees of freedom, the trace of
the analog of the "hat" matrix, $\text{tr}[\mathbf{X} (\mathbf{X}^\top \mathbf{X} + \lambda \mathbf{I})^{-1} \mathbf{X}^\top]$.
In general,

* the bias increases as $\lambda$ increases,
* the sampling variance decreases as $\lambda$ increases.

One goal of the `genridge` package is to provide visualization methods for these models to
help understand the tradeoff between bias and variance and the choice of a shrinkage value $\lambda$.

### Package overview

The `genridge` package introduces generalizations of the standard univariate
ridge trace plot used in ridge regression and related methods (Friendly, 2011, 2013). These graphical methods
show both bias (actually, shrinkage) and precision by plotting the covariance ellipsoids of the estimated
coefficients, rather than just the estimates themselves. 2D and 3D plotting methods are provided,
both in the space of the predictor variables and in the transformed space of the PCA/SVD of the
predictors.

### Details

This package provides computational support for the graphical methods described in Friendly (2013). Ridge regression models may be fit using the function `ridge()`, which incorporates features of `MASS::lm.ridge()` and `ElemStatLearn::simple.ridge()`. In particular, the shrinkage factors in ridge regression may be specified either in terms of the constant ($\lambda$) added to the diagonal of the $\mathbf{X}^\top \mathbf{X}$ matrix, or as the equivalent number of degrees of freedom.

The following computational functions are provided:

* `ridge()`: Calculates ridge regression estimates; returns an object of class `"ridge"`
* `pca.ridge()`: Transforms coefficients and covariance matrices to PCA/SVD space; returns an object of class `c("pcaridge", "ridge")`
* `vif.ridge()`: Calculates VIFs for `"ridge"` objects
* `precision()`: Calculates measures of precision and shrinkage

More importantly, `ridge()` also calculates and returns the associated covariance matrices of each of the ridge estimates, allowing precision to be studied and displayed graphically.

This provides the support for the main plotting functions in the package:

* `traceplot()`: Traditional univariate ridge trace plots
* `plot.ridge()`: Bivariate ridge trace plots, showing the covariance ellipse of the estimated coefficients
* `pairs.ridge()`: All pairwise bivariate ridge trace plots
* `plot3d.ridge()`: 3D ridge trace plots with ellipsoids
* `plot.precision()`: Plots a measure of precision vs. one of shrinkage
* `plot.vif.ridge()`: Plots variance inflation factors

In addition, the `pca()` method for `"ridge"` objects transforms the coefficients and covariance matrices of a ridge object from predictor space to the equivalent, but more interesting, space of the PCA of $X^\top X$ or the SVD of $X$. The main plotting functions also work for these objects, of class `c("pcaridge", "ridge")`.
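
Conceptually, this transformation is just a rotation of the estimates and their covariance matrices by the right singular vectors of the scaled predictor matrix. A minimal sketch of the idea, shown here for the OLS solution on the `longley` data used in the Examples below (illustrative only; it glosses over the package's scaling conventions and is not its internal code):

```{r pca-space-idea}
# Illustrative sketch (not the package internals): rotate a coefficient
# vector and its covariance matrix into the space of the right singular
# vectors V of the scaled predictor matrix X.
X <- scale(as.matrix(longley[, 1:6]))
y <- longley$Employed - mean(longley$Employed)
V <- svd(X)$v                                    # right singular vectors
b <- drop(solve(crossprod(X), crossprod(X, y)))  # OLS coefficients
S <- drop(crossprod(y - X %*% b) / (nrow(X) - ncol(X))) * solve(crossprod(X))

b_pca <- drop(t(V) %*% b)    # coefficients in PCA/SVD space
S_pca <- t(V) %*% S %*% V    # their covariance matrix in PCA/SVD space
```
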
* `biplot.pcaridge()`: Adds variable vectors to the bivariate plots of coefficients in PCA space

Finally, the functions `precision()` and `vif.ridge()` provide other useful measures and plots.

## Installation

+-------------------+-----------------------------------------------------------------------------+
| CRAN version      | `install.packages("genridge")`                                              |
+-------------------+-----------------------------------------------------------------------------+
| R-universe        | `install.packages("genridge", repos = 'https://friendly.r-universe.dev')`   |
+-------------------+-----------------------------------------------------------------------------+
| Development       | `remotes::install_github("friendly/genridge")`                              |
| version           |                                                                             |
+-------------------+-----------------------------------------------------------------------------+

## Examples

The classic example for ridge regression is Longley's (1967) data, consisting of 7 economic variables observed yearly from 1947 to 1962 ($n = 16$), in the data frame `datasets::longley`.
The goal is to predict `Employed` from `GNP`, `Unemployed`, `Armed.Forces`, `Population`, `Year`, and
`GNP.deflator`.

These data, constructed to illustrate numerical problems in the least squares software of the time, are (purposely) perverse, in that:

* each variable is a time series, so that there is clearly a lack of independence among the predictors.
* worse, there is also some _structural collinearity_ among the variables `GNP`, `Year`, `GNP.deflator`, and `Population`; e.g., `GNP.deflator` is a multiplicative factor to account for inflation.

```{r longley1}
data(longley)
str(longley)
```

Shrinkage values can be specified using either $\lambda$ (where $\lambda = 0$ corresponds to OLS)
or the equivalent effective degrees of freedom.
This quantifies the tradeoff between bias and variance for predictive modeling:
OLS has low bias, but can have large predictive variance.

`ridge()` returns an object of class `"ridge"`, containing the matrix of coefficients for each predictor at each shrinkage value, along with other quantities.

```{r longley2}
lambda <- c(0, 0.005, 0.01, 0.02, 0.04, 0.08)
lridge <- ridge(Employed ~ GNP + Unemployed + Armed.Forces + Population + Year + GNP.deflator, 
		data=longley, lambda=lambda)
lridge
```

### Variance Inflation Factors

The effects of collinearity can be measured by a variance inflation factor (VIF), the ratio
of the sampling variance of a coefficient to what it would be if all
predictors were uncorrelated, given by
$$
\text{VIF}(\beta_i) = \frac{1}{1 - R^2_{i | \text{others}}} \; ,
$$
where "others" represents all other predictors except $X_i$.
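
As a quick check on this definition (an illustrative computation, not what `vif()` does internally), the OLS VIFs can be reproduced by regressing each predictor on all of the others:

```{r vif-by-hand}
# Illustrative sketch: VIF_i = 1 / (1 - R^2) from regressing predictor i
# on all of the other predictors.
preds <- c("GNP", "Unemployed", "Armed.Forces", "Population", "Year", "GNP.deflator")
sapply(preds, function(v) {
  r2 <- summary(lm(reformulate(setdiff(preds, v), response = v), data = longley))$r.squared
  1 / (1 - r2)
})
```
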
`vif()` for a `"ridge"` object calculates variance inflation factors for all
values of the ridge constant. You can see that for OLS ($\lambda = 0$), nearly all VIF values
are dangerously high. With a ridge factor of 0.04 or greater, variance inflation has been
considerably reduced for a few of the predictors.

```{r vif}
vridge <- vif(lridge)
vridge
```

`vif()` returns a `"vif.ridge"` object, for which there is a plot method:

```{r plot-vif}
clr <-  c("black", "red", "darkgreen","blue", "cyan4", "magenta")
pch <- c(15:18, 7, 9)
plot(vridge, X = "df", Y="sqrt",
     col=clr, pch=pch, cex = 1.2,
     xlim = c(4, 6.5))
```

### Univariate trace plots

A standard univariate `traceplot()` simply plots the estimated coefficients for each predictor
against the shrinkage factor, $\lambda$.

```{r longley-tp1}
#| fig.width = 7,
#| echo = -1,
#| fig.cap = "Univariate ridge trace plot for the coefficients of predictors of Employment in Longley’s data via ridge regression, with ridge constants λ = 0, 0.005, 0.01, 0.02, 0.04, 0.08."
par(mar=c(4, 4, 1, 1)+ 0.1)
traceplot(lridge, xlim = c(-0.02, 0.08))
```

The dotted lines show choices of the ridge constant from two commonly used criteria for balancing bias against precision,
due to **HKB**: Hoerl, Kennard, and Baldwin (1975), and **LW**: Lawless and Wang (1976).
These values (along with a generalized cross-validation value, GCV) are also stored
in the `"ridge"` object:

```{r HKB}
c(HKB=lridge$kHKB, LW=lridge$kLW, GCV=lridge$kGCV)
# these are also stored in 'criteria' for use with plotting methods
criteria <- lridge$criteria
```

These values seem rather small, but note that the coefficients for `Year` and `GNP` are 
shrunk considerably.

### Alternative plot

It is sometimes easier to interpret the plot when coefficients are plotted against the equivalent
degrees of freedom, where $\lambda = 0$ corresponds to 6 degrees of freedom in the parameter
space of six predictors. Note that the values of $\lambda$ used here were chosen
approximately on a log scale. Using the scaling `X="df"` makes the points more nearly equally
spaced.

```{r longley-tp2}
#| fig.width = 7,
#| echo = -1,
#| fig.cap = "Univariate ridge trace plot of coefficients against effective degrees of freedom."
par(mar=c(4, 4, 1, 1)+ 0.1)
traceplot(lridge, X="df", xlim = c(4, 6.5))
```
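
The `df` scale itself is easy to compute: if $d_i$ are the singular values of the scaled predictor matrix, the effective degrees of freedom are $\sum_i d_i^2 / (d_i^2 + \lambda)$, which equals 6 at $\lambda = 0$. A sketch (illustrative only; the package's own scaling conventions may make its values differ slightly):

```{r df-by-hand}
# Illustrative sketch: df(lambda) = sum_i d_i^2 / (d_i^2 + lambda),
# where d_i are the singular values of the scaled predictor matrix.
d <- svd(scale(as.matrix(longley[, 1:6])))$d
sapply(lambda, function(l) sum(d^2 / (d^2 + l)))
```
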
**But wait: This is the wrong plot!** These plots show the trend toward increased bias associated with larger $\lambda$, but they do **not**
show the accompanying decrease in variance (increase in precision).
For that, we need to consider the variances and covariances of the estimated coefficients.
The univariate trace plot is the wrong graphic form for what is essentially a _multivariate_ problem,
where we would like to visualize how _both_ the coefficients and their variances change with
$\lambda$.

### Bivariate trace plots

The bivariate analog of the trace plot suggested by Friendly (2013) plots **bivariate
confidence ellipses** for pairs of coefficients. Their centers, $(\widehat{\beta}_i, \widehat{\beta}_j)$, show the estimated coefficients, and their size and shape
indicate sampling variance, $\widehat{\text{Var}} (\mathbf{\widehat{\beta}}_{ij})$.
Here, we plot those for `GNP` against
four of the other predictors.

```{r longley-plot-ridge}
#| out.width = "80%",
#| fig.show = "hold",
#| fig.cap = "Bivariate ridge trace plots for the coefficients of four predictors against the coefficient for GNP in Longley’s data, with λ = 0, 0.005, 0.01, 0.02, 0.04, 0.08. In most cases, the coefficients are driven toward zero, but the bivariate plot also makes clear the reduction in variance, as well as the bivariate path of shrinkage."
op <- par(mfrow=c(2,2), mar=c(4, 4, 1, 1)+ 0.1)
clr <-  c("black", "red", "darkgreen","blue", "cyan4", "magenta")
pch <- c(15:18, 7, 9)
lambdaf <- c(expression(~widehat(beta)^OLS), ".005", ".01", ".02", ".04", ".08")

for (i in 2:5) {
	plot(lridge, variables=c(1,i), 
	     radius=0.5, cex.lab=1.5, col=clr, 
	     labels=NULL, fill=TRUE, fill.alpha=0.2)
	text(lridge$coef[1,1], lridge$coef[1,i], 
	     expression(~widehat(beta)^OLS), cex=1.5, pos=4, offset=.1)
	text(lridge$coef[-1,c(1,i)], lambdaf[-1], pos=3, cex=1.3)
}
par(op)
```

As can be seen, the coefficients for each pair of predictors trace a path generally in toward
the origin $(0, 0)$, and the covariance ellipses get smaller, indicating increased precision.

The `pairs()` method for `"ridge"` objects shows all pairwise views in scatterplot matrix form.

```{r longley-pairs}
#| out.width = "90%",
#| fig.cap = "Scatterplot matrix of bivariate ridge trace plots"
pairs(lridge, radius=0.5, diag.cex = 2, 
      fill = TRUE, fill.alpha = 0.1)
```

See Friendly et al. (2013) for other examples of how elliptical thinking can lead to insights
in statistical problems.

### Visualizing the bias-variance tradeoff

The function `precision()` calculates a number of measures of the effect of shrinkage of the
coefficients on their estimated sampling variance. Larger shrinkage $\lambda$ should lead
to smaller $\widehat{\text{Var}} (\mathbf{\widehat{\beta}})$, indicating increased precision.
See `help(precision)` for details.

```{r precision}
precision(lridge)
```

`norm.beta` $= \lVert\mathbf{\beta}\rVert / \max{\lVert\mathbf{\beta}\rVert}$ is a measure of shrinkage, and `det`
$= \log{| \text{Var}(\mathbf{\beta}) |}$
is a measure of the variance of the coefficients (the inverse of precision). Plotting these against
each other gives a direct view of the tradeoff between bias and precision.

```{r precision-plot}
#| fig.show = "hold",
#| fig.cap = "Plot of log(Variance) vs. shrinkage to show the tradeoff between bias and variance."
pridge <- precision(lridge)
op <- par(mar=c(4, 4, 1, 1) + 0.2)
library(splines)
with(pridge, {
	plot(norm.beta, det, type="b", 
	cex.lab=1.25, pch=16, cex=1.5, col=clr, lwd=2,
  xlab='shrinkage: ||b|| / max(||b||)',
	ylab='variance: log |Var(b)|')
	text(norm.beta, det, lambdaf, cex=1.25, pos=c(rep(2,length(lambda)-1),4), xpd = TRUE)
	text(min(norm.beta), max(det), "log |Variance| vs. Shrinkage", cex=1.5, pos=4)
	})
mod <- lm(cbind(det, norm.beta) ~ bs(lambda, df=5), data=pridge)
x <- data.frame(lambda=c(lridge$kHKB, lridge$kLW))
fit <- predict(mod, x)
points(fit[,2:1], pch=15, col=gray(.50), cex=1.5)
text(fit[,2:1], c("HKB", "LW"), pos=3, cex=1.5, col=gray(.50))
par(op)
```
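
For a sense of where the `det` measure comes from, the $\lambda = 0$ value can be approximated directly from an OLS fit (a rough, illustrative sketch; the package computes this on its own scaled variables, so the constants need not match):

```{r det-by-hand}
# Rough illustration: log |Var(b)| for the OLS fit, via the log-determinant
# of the estimated covariance matrix of the coefficients (intercept dropped).
ols <- lm(Employed ~ GNP + Unemployed + Armed.Forces + Population + Year + GNP.deflator,
          data = longley)
determinant(vcov(ols)[-1, -1])$modulus
```
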
These plots are now provided in the `plot()` method for the `"precision"` objects returned by `precision()`.
This plots the measure `norm.beta` on the horizontal axis vs. any of 
the variance measures `det`, `trace`, or `max.eig`, and labels the points with either `k` or `df`.
See `help("precision")` for the definitions of these variance measures.

A plot similar to that above can be produced as shown below, but here labeling the points with the
effective degrees of freedom. The shape of the curve is quite similar.

```{r precision-plot2,echo=-1}
#| fig.cap = "Plot of det(Variance) vs. shrinkage (`norm.beta`) to show the tradeoff between bias and variance using the `plot()` method for `'precision'` objects. Points are labeled with the effective degrees of freedom."
op <- par(mar=c(4, 4, 1, 1) + 0.2)
plot(pridge, labels = "df", label.prefix="df:", criteria = criteria)
```

## Low-rank views

Just as principal components analysis gives low-dimensional views of a data set, PCA can
be useful to understand ridge regression.

The `pca()` method transforms a `"ridge"` object
from parameter space, where the estimated coefficients are
$\widehat{\beta}_\lambda$ with covariance matrices $\Sigma_\lambda$, to the
principal component space defined by the right singular vectors, $V$,
of the singular value decomposition of the scaled predictor matrix, $X$.

```{r pca-traceplot}
#| echo = -1
par(mar=c(4, 4, 1, 1)+ 0.1)
plridge <- pca(lridge)
plridge
traceplot(plridge)
```

What is perhaps surprising is that the coefficients for the first 4 components are not shrunk at all.
Rather, the effect of shrinkage is seen only on the _last two dimensions_. These are the
directions that contribute most to collinearity, for which other visualization methods have
been proposed (Friendly & Kwan, 2009).

The `pairs()` plot illustrates the _joint_ effects: the principal components of
$\mathbf{X}$ are uncorrelated, so the ellipses are all aligned with the coordinate axes,
and the ellipses largely coincide for dimensions 1 to 4:

```{r pca-pairs}
#| out.width = "80%"
pairs(plridge)
```

If we focus on the plot of dimensions `5:6`, we can see where all the shrinkage action
is in this representation. Generally, the predictors that are related to the smallest
dimension (6) are shrunk quickly at first.

```{r pca-dim56}
#| echo = -1
par(mar=c(4, 4, 1, 1)+ 0.1)
plot(plridge, variables=5:6, fill = TRUE, fill.alpha=0.2)
text(plridge$coef[, 5:6], 
	   label = lambdaf, 
     cex=1.5, pos=4, offset=.1)
```

### Biplot view

Finally, we can project the _predictor variables_ into the PCA space of the _smallest dimensions_,
where the shrinkage action mostly occurs, to see how the predictor variables relate to these dimensions.

`biplot.pcaridge()` supplements the standard display of the covariance ellipsoids for a ridge regression problem in PCA/SVD space with labeled arrows showing the contributions of the original variables to the dimensions plotted. The length of each arrow reflects the proportion of variance
that the corresponding predictor shares with the components.

The biplot view showing the dimensions corresponding to the two smallest singular values is particularly useful for understanding how the predictors contribute to shrinkage in ridge regression.
Here, `Year` and `Population` largely contribute to `dim 5`; a contrast
between (`Year`, `Population`) and `GNP` contributes to `dim 6`.

```{r biplot}
#| fig.show = "hold",
#| fig.cap = "Biplot view of the ridge trace plot for the smallest two dimensions, where the effects of shrinkage are most apparent."
op <- par(mar=c(4, 4, 1, 1) + 0.2)
biplot(plridge, radius=0.5, 
       ref=FALSE, asp=1, 
       var.cex=1.15, cex.lab=1.3, col=clr,
       fill=TRUE, fill.alpha=0.2, prefix="Dimension ")
text(plridge$coef[,5:6], lambdaf, pos=2, cex=1.3)
par(op)
```

## Other examples

The genridge package contains four data sets, each with its own examples; e.g., you can try `example(Acetylene)`.

```{r datasets}
vcdExtra::datasets(package="genridge")
```

## References

Friendly, M. (2011). Generalized Ridge Trace Plots: Visualizing Bias _and_ Precision with the `genridge` R package. SCS Seminar, Jan. 2011. Slides:
[gentalk.pdf](http://euclid.psych.yorku.ca/datavis/papers/gentalk.pdf);
[gentalk-2x2.pdf](http://euclid.psych.yorku.ca/datavis/papers/gentalk-2x2.pdf)

Friendly, M. (2013).
The Generalized Ridge Trace Plot: Visualizing Bias _and_ Precision.
_Journal of Computational and Graphical Statistics_, **22**(1), 50–68.
[DOI link](http://dx.doi.org/10.1080/10618600.2012.681237). Online:
[genridge-jcgs.pdf](https://www.datavis.ca/papers/genridge-jcgs.pdf);
Supp. materials: [genridge-supp.zip](http://datavis.ca/papers/genridge-supp.zip)

Friendly, M., and Kwan, E. (2009). Where’s Waldo: Visualizing Collinearity Diagnostics.
_The American Statistician_, **63**(1), 56–65.
[DOI link](https://doi.org/10.1198/tast.2009.0012).
Online: [viscollin-tast.pdf](http://datavis.ca/papers/viscollin-tast.pdf);
Supp. materials: [http://datavis.ca/papers/viscollin/](http://datavis.ca/papers/viscollin/)

Friendly, M., Monette, G., & Fox, J. (2013). Elliptical Insights: Understanding Statistical Methods Through Elliptical Geometry. _Statistical Science_, **28**(1), 1–39. https://doi.org/10.1214/12-STS402

Golub, G. H., Heath, M., & Wahba, G. (1979). Generalized cross-validation as a method for choosing a good ridge parameter. _Technometrics_, **21**, 215–223. https://doi.org/10.2307/1268518

Hoerl, A. E., Kennard, R. W., and Baldwin, K. F. (1975). Ridge Regression: Some Simulations. _Communications in Statistics_, **4**, 105–123.

Lawless, J. F., and Wang, P. (1976). A Simulation Study of Ridge and Other Regression Estimators. _Communications in Statistics_, **5**, 307–323.

Longley, J. W. (1967). An appraisal of least-squares programs from the point of view of the user. _Journal of the American Statistical Association_, **62**, 819–841.