ML.REGRESSION.LASSO¶
Creates a Lasso Regression object.
Syntax¶
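A sketch of the call shape, assuming both arguments listed below are optional (the function name and argument names are from this page; the bracket notation marking them as optional is an assumption):

```
=ML.REGRESSION.LASSO([alpha], [fit_intercept])
```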
Arguments¶
| Name | Type | Default | Description |
|---|---|---|---|
| alpha | float | 1.0 | Regularization strength; must be a positive float. Larger values specify stronger regularization. |
| fit_intercept | bool | TRUE | Whether to calculate the intercept for this model. If set to FALSE, no intercept will be used in calculations (i.e., data is expected to be centered). |
Returns¶
A Lasso Regression model handle, ready to pass into ML.FIT.
When to use¶
Reach for Lasso when you suspect only a handful of your features actually matter and you want the model to tell you which ones. Lasso applies an L1 penalty that drives unimportant feature coefficients all the way to zero — giving you a sparse model that doubles as a feature selector.
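In symbols, Lasso minimizes the usual least-squares loss plus an L1 penalty on the coefficients (scikit-learn's conventional scaling is assumed here; this page does not state the exact objective):

$$\min_{w}\; \frac{1}{2n}\,\lVert y - Xw \rVert_2^2 \;+\; \alpha\,\lVert w \rVert_1$$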
Compared to the alternatives in this namespace:
- Use ML.REGRESSION.LINEAR when you want every feature in the fit and have no overfitting concerns.
- Use ML.REGRESSION.RIDGE when features are correlated and you want all of them in the model with shrunken coefficients.
- Use Lasso when you want a small, interpretable model with built-in feature selection.
- Use ML.REGRESSION.ELASTIC_NET when you want a blend of Lasso's sparsity and Ridge's stability under correlated features.
Examples¶
Fit a Lasso model on features in A2:E100 and the numeric target in
F2:F100, then predict ten new rows in A101:E110:
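A sketch of what this could look like, assuming the model handle is returned into G1 and that an ML.PREDICT companion function exists (ML.FIT is documented above; ML.PREDICT is a hypothetical name):

```
' in G1: fit a Lasso model with the default alpha = 1.0
=ML.FIT(ML.REGRESSION.LASSO(), A2:E100, F2:F100)
' elsewhere: predict the ten new rows (ML.PREDICT is an assumed companion)
=ML.PREDICT(G1, A101:E110)
```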
Increase alpha to push more feature coefficients to exactly zero:
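For instance, passing a larger alpha as the first argument (a sketch; the positional form follows the arguments table above):

```
=ML.FIT(ML.REGRESSION.LASSO(10.0), A2:E100, F2:F100)
```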
Score the fitted model on a held-out test set in A101:E120 / F101:F120:
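Assuming a hypothetical ML.SCORE companion function that takes the fitted handle (here G1) plus the test features and targets:

```
=ML.SCORE(G1, A101:E120, F101:F120)
```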
Remarks¶
- alpha is the regularization strength. Larger alpha = more zeros in the coefficient vector. Defaults to 1.0.
- Scale your features first (e.g. with ML.PREPROCESSING.STANDARD_SCALER): Lasso applies the same penalty to every coefficient regardless of scale.
- When two features are highly correlated, Lasso will tend to pick one and zero the other almost arbitrarily. If that bothers you, switch to ML.REGRESSION.ELASTIC_NET.
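A sketch of scaling before fitting, assuming a hypothetical ML.TRANSFORM helper that applies the scaler to a range and spills the result (only ML.PREPROCESSING.STANDARD_SCALER is named on this page; the rest of the plumbing is an assumption):

```
' in G2: scale the raw features (spills into G2:K100)
=ML.TRANSFORM(ML.PREPROCESSING.STANDARD_SCALER(), A2:E100)
' fit on the scaled features instead of the raw ones
=ML.FIT(ML.REGRESSION.LASSO(), G2:K100, F2:F100)
```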