ML.REGRESSION.LASSO

Creates a Lasso Regression object.

Syntax

ML.REGRESSION.LASSO(alpha, fit_intercept)

Arguments

Name Type Default Description
alpha float 1.0 Regularization strength; must be a positive float. Larger values specify stronger regularization.
fit_intercept bool TRUE Whether to calculate the intercept for this model. If set to FALSE, no intercept is used in calculations (i.e., the data is expected to be centered).

Returns

A Lasso Regression model handle, ready to pass into ML.FIT.

When to use

Reach for Lasso when you suspect only a handful of your features actually matter and you want the model to tell you which ones. Lasso applies an L1 penalty that drives unimportant feature coefficients all the way to zero — giving you a sparse model that doubles as a feature selector.
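To see that sparsity mechanism concretely, here is a minimal scikit-learn sketch (an assumption for illustration — this function is not necessarily backed by scikit-learn, but Lasso there uses the same alpha-weighted L1 penalty). Only two of ten features carry signal, and the penalty drives the irrelevant coefficients to exactly zero:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# Only the first two features influence the target; the other eight are noise.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = Lasso(alpha=1.0).fit(X, y)
# The L1 penalty sets most of the 8 irrelevant coefficients to exactly 0.0,
# while the two informative coefficients survive (shrunken toward zero).
print(model.coef_)
```

Reading off which coefficients are nonzero is the "doubles as a feature selector" part: the surviving columns are the features the model kept.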

Compared to the alternatives in this namespace:

  • Use ML.REGRESSION.LINEAR when you want every feature in the fit and have no overfitting concerns.
  • Use ML.REGRESSION.RIDGE when features are correlated and you want all of them in the model with shrunken coefficients.
  • Use ML.REGRESSION.LASSO when you want a small, interpretable model with built-in feature selection.
  • Use ML.REGRESSION.ELASTIC_NET when you want a blend of Lasso's sparsity and Ridge's stability under correlated features.
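The Ridge/Lasso contrast above can be sketched with scikit-learn analogues (again an assumption — the worksheet functions are treated here as behaving like scikit-learn's estimators of the same names). On data where only one of eight features matters, Ridge shrinks every coefficient but keeps them all nonzero, while Lasso zeroes the irrelevant ones outright:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 8))
# Only feature 0 drives the target; features 1..7 are pure noise.
y = 4.0 * X[:, 0] + rng.normal(scale=0.1, size=200)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=1.0).fit(X, y)

# Ridge (L2): every coefficient shrunk but nonzero.
# Lasso (L1): the seven irrelevant coefficients are exactly 0.0.
print(ridge.coef_)
print(lasso.coef_)
```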

Examples

Fit a Lasso model on features in A2:E100 and the numeric target in F2:F100, then predict ten new rows in A101:E110 (the first formula is entered in H1 and the second in H2, so the later formulas can reference the returned handles):

=ML.REGRESSION.LASSO()
=ML.FIT(H1, A2:E100, F2:F100)
=ML.PREDICT(H2, A101:E110)

Increase alpha above the default of 1.0 to push more feature coefficients to exactly zero:

=ML.REGRESSION.LASSO(5.0)

Score the fitted model on a held-out test set in A101:E120 / F101:F120:

=ML.EVAL.SCORE(H2, A101:E120, F101:F120)

Remarks

  • alpha is the regularization strength. Larger alpha = more zeros in the coefficient vector. Defaults to 1.0.
  • Scale your features first (e.g. with ML.PREPROCESSING.STANDARD_SCALER) — Lasso applies the same penalty to every coefficient regardless of scale.
  • When two features are highly correlated Lasso will tend to pick one and zero the other almost arbitrarily. If that bothers you, switch to ML.REGRESSION.ELASTIC_NET.
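The scaling remark can be demonstrated with a scikit-learn sketch (an assumption, as above; StandardScaler plays the role of ML.PREPROCESSING.STANDARD_SCALER). Two features carry equally strong signal, but one lives on a scale 1000x larger; with unscaled inputs the flat per-coefficient penalty wipes out the small-scale feature while the large-scale one is barely touched, and standardizing first restores fair treatment:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
X[:, 1] *= 1000.0  # same kind of signal, much larger units (e.g. mm vs m)
# Equally informative once units are accounted for: both contribute ~2 std units.
y = 2.0 * X[:, 0] + 0.002 * X[:, 1] + rng.normal(scale=0.1, size=200)

raw = Lasso(alpha=3.0).fit(X, y)
scaled = Lasso(alpha=0.5).fit(StandardScaler().fit_transform(X), y)

# Unscaled: the small-scale feature is zeroed out, the large-scale one survives
# almost unshrunk. Standardized: both features survive with similar coefficients.
print(raw.coef_)
print(scaled.coef_)
```

The alpha values differ between the two fits only because the penalty's effective strength changes with feature scale — which is exactly the point of the remark.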

See also