Bug fixes

- `confusion.glmnet` was sometimes not returning a list because of apply collapsing structure
- `cv.mrelnet` and `cv.multnet` dropping dimensions inappropriately
- Fix to `storePB` to avoid segfault. Thanks Tomas Kalibera!
- Changed the help for `assess.glmnet` and cousins to be more helpful!
- Changed some logic in `lambda.interp` to avoid edge cases (thanks David Keplinger)

Minor fix to correct Depends in the DESCRIPTION to R (>= 3.6.0)

This is a major revision with much added functionality, listed roughly in order of importance. An additional vignette called `relax` is supplied to describe the usage.

- `relax` argument added to `glmnet`. This causes the models in the path to be refit without regularization. The resulting object inherits from class `glmnet`, and has an additional component, itself a glmnet object, which is the relaxed fit.
- `relax` argument to `cv.glmnet`. This allows selection from a mixture of the relaxed fit and the regular fit. The mixture is governed by an argument `gamma`, with a default of 5 values between 0 and 1.

- `predict`, `coef` and `plot` methods for `relaxed` and `cv.relaxed` objects.
- `print` method for `relaxed` objects, and new `print` methods for `cv.glmnet` and `cv.relaxed` objects.
- A progress bar is provided via an additional argument `trace.it=TRUE` to `glmnet` and `cv.glmnet`. This can also be set for the session via `glmnet.control`.

- Three new functions `assess.glmnet`, `roc.glmnet` and `confusion.glmnet` for displaying the performance of models.
- `makeX` for building the `x` matrix for input to `glmnet`. Main functionality is *one-hot encoding* of factor variables, treatment of `NA`, and creating sparse inputs.
- `bigGlm` for fitting the GLMs of `glmnet` unpenalized.
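A rough sketch of what `makeX` does; the small data frame here is made up for illustration, and the argument names follow the function's help page:

```r
library(glmnet)

# A toy data frame with a factor column and a missing value
df <- data.frame(
  age   = c(21, 35, NA, 50),
  group = factor(c("a", "b", "a", "c"))
)

# One-hot encodes `group`, mean-imputes the NA in `age`,
# and returns a sparse matrix suitable as `x` for glmnet()
x <- makeX(df, na.impute = TRUE, sparse = TRUE)
```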

In addition to these new features, some of the code in `glmnet` has been tidied up, especially related to CV.
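A minimal sketch of the relaxed-fit workflow described above, on simulated data (the variable names are illustrative):

```r
library(glmnet)

set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- rnorm(100)

# Path of relaxed fits, with a progress bar
fit <- glmnet(x, y, relax = TRUE, trace.it = TRUE)

# CV over the mixture of relaxed and regular fits;
# gamma defaults to 5 values between 0 and 1
cvfit <- cv.glmnet(x, y, relax = TRUE)
plot(cvfit)
predict(cvfit, newx = x[1:5, ], s = "lambda.min", gamma = "gamma.min")

# The progress bar can also be turned on for the whole session
glmnet.control(itrace = 1)
```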

- Fixed a bug in internal function `coxnet.deviance` to do with input `pred`, as well as saturated `loglike` (missing) and weights
- added a `coxgrad` function for computing the gradient

- Fixed a bug in coxnet to do with ties between death set and risk set

- Added an option `alignment` to `cv.glmnet`, for cases when weird things happen

- Further fixes to mortran to get clean fortran; current mortran src is in `inst/mortran`
- Additional fixes to mortran; current mortran src is in `inst/mortran`
- Mortran uses double precision, and variables are initialized to avoid `-Wall` warnings
- cleaned up repeated code in CV by creating a utility function

- Fixed up the mortran so that generic fortran compiler can run without any configure

- Cleaned up some bugs to do with exact prediction
- `newoffset` created problems all over; fixed these

- Added protection with `exact=TRUE` calls to `coef` and `predict`. See help file for more details.

- Two iterations to fix native fortran registration.

- included native registration of fortran

- constant `y` blows up `elnet`; error trap included
- fixed `lambda.interp`, which was returning `NaN` under degenerate circumstances.

- added some code to extract time and status gracefully from a `Surv` object

- changed the usage of `predict` and `coef` with `exact=TRUE`. The user is strongly encouraged to supply the original `x` and `y` values, as well as any other data such as weights that were used in the original fit.
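A short sketch of the difference this makes; the data here are simulated for illustration:

```r
library(glmnet)

set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- rnorm(100)
fit <- glmnet(x, y)

# Without exact = TRUE, coefficients at a new s are linearly
# interpolated from the saved lambda path
coef(fit, s = 0.05)

# With exact = TRUE the model is refit at s = 0.05; passing the
# original x and y lets the refit reproduce the training conditions
coef(fit, s = 0.05, exact = TRUE, x = x, y = y)
```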

- Major upgrade to CV; let each model use its own lambdas, then predict at original set.
- fixed some minor bugs

- fixed subsetting bug in `lognet` when some weights are zero and `x` is sparse

- fixed bug in multivariate response model (uninitialized variable), leading to valgrind issues
- fixed issue with multinomial response matrix and zeros
- Added a link to a glmnet vignette

- fixed bug in `predict.glmnet`, `predict.multnet` and `predict.coxnet` when the `s=` argument is used with a vector of values; it was not doing the matrix multiply correctly
- changed documentation of glmnet to explain logistic response matrix

- added parallel capabilities, and fixed some minor bugs

- added `intercept` option

- added upper and lower bounds for coefficients
- added `glmnet.control` for setting system parameters
- fixed serious bug in `coxnet`
- added `exact=TRUE` option for prediction and coef functions

- Major new release
- added `mgaussian` family for multivariate response
- added `grouped` option for multinomial family

- nasty bug fixed in fortran - removed reference to dble
- check class of `newx` and make `dgCMatrix` if sparse

- `lognet` added a classnames component to the object
- `predict.lognet(type="class")` now returns a character vector/matrix

- `predict.glmnet`: fixed bug with `type="nonzero"`
- `glmnet`: now `x` can inherit from `sparseMatrix` rather than the very specific `dgCMatrix`, and this will trigger sparse mode for glmnet

- `glmnet.Rd` (`lambda.min`): changed value to 0.01 if `nobs < nvars`; (`lambda`): added warnings to avoid single value; (`lambda.min`): renamed it `lambda.min.ratio`
- `glmnet` (`lambda.min`): changed value to 0.01 if `nobs < nvars`; (`HessianExact`): changed the sense (it was wrong); (`lambda.min`): renamed it `lambda.min.ratio`. This allows it to be called `lambda.min` in a call, though
- `predict.cv.glmnet` (new function): makes predictions directly from the saved `glmnet` object on the cv object
- `coef.cv.glmnet` (new function): as above
- `predict.cv.glmnet.Rd`: help functions for the above
- `cv.glmnet`: insert `drop(y)` to avoid 1-column matrices; now include a `glmnet.fit` object for later predictions
- `nonzeroCoef`: added a special case for a single variable in `x`; it was dying on this
- `deviance.glmnet`: included
- `deviance.glmnet.Rd`: included

- Note that this starts from version `glmnet_1.4`.