
ML.REGRESSION.ELASTIC_NET

Creates an Elastic Net regression object.

Syntax

ML.REGRESSION.ELASTIC_NET(alpha, l1_ratio, fit_intercept)

Arguments

Name Type Default Description
alpha float 1.0 Regularization strength; must be a positive float. Larger values specify stronger regularization.
l1_ratio float 0.5 The Elastic Net mixing parameter, with 0 <= l1_ratio <= 1. For l1_ratio = 0 the penalty is a pure L2 penalty; for l1_ratio = 1 it is a pure L1 penalty. Values in between blend the two.
fit_intercept bool TRUE Whether to calculate the intercept for this model. If set to FALSE, no intercept is used in calculations (i.e., the data is expected to be centered).

Returns

An Elastic Net regression model handle, ready to pass into ML.FIT.

When to use

Reach for Elastic Net when you want Lasso's automatic feature selection but also the stability of Ridge when features are correlated. Elastic Net mixes the L1 and L2 penalties through the l1_ratio knob, letting you dial between pure Ridge and pure Lasso to fit your data.

Compared to the alternatives in this namespace:

  • Use ML.REGRESSION.RIDGE (l1_ratio=0) when correlated features should all stay in the model with shrunken coefficients.
  • Use ML.REGRESSION.LASSO (l1_ratio=1) when you want only the most important features to survive.
  • Use ML.REGRESSION.ELASTIC_NET when you want both: feature selection plus stable handling of correlated feature groups.
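The mixing the bullets describe can be written as a single penalty term. The sketch below follows the common scikit-learn-style parameterization, which these arguments appear to mirror — treat it as an illustration of the convention, not the add-in's exact objective:

```python
def elastic_net_penalty(weights, alpha=1.0, l1_ratio=0.5):
    """Combined penalty: alpha * (l1_ratio * L1 + 0.5 * (1 - l1_ratio) * L2^2).

    l1_ratio=0 reduces to a pure Ridge (L2) penalty;
    l1_ratio=1 reduces to a pure Lasso (L1) penalty.
    """
    l1 = sum(abs(w) for w in weights)          # L1 norm: sum of |w|
    l2_sq = sum(w * w for w in weights)        # squared L2 norm: sum of w^2
    return alpha * (l1_ratio * l1 + 0.5 * (1.0 - l1_ratio) * l2_sq)

w = [0.5, -2.0, 0.0]
print(elastic_net_penalty(w, alpha=1.0, l1_ratio=0.0))  # pure Ridge: 0.5 * 4.25 = 2.125
print(elastic_net_penalty(w, alpha=1.0, l1_ratio=1.0))  # pure Lasso: 0.5 + 2.0 = 2.5
```

Raising l1_ratio shifts weight from the smooth quadratic term toward the absolute-value term, which is what drives coefficients exactly to zero.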

Examples

Fit an Elastic Net model with the default 50/50 mix on features in A2:E100 and target in F2:F100. Enter the formulas below in H1, H2, and H3 respectively, so each step references the handle returned by the previous one:

=ML.REGRESSION.ELASTIC_NET()
=ML.FIT(H1, A2:E100, F2:F100)
=ML.PREDICT(H2, A101:E110)

Bias toward Lasso (more sparsity) by raising l1_ratio:

=ML.REGRESSION.ELASTIC_NET(1.0, 0.9)

Bias toward Ridge (smoother shrinkage) by lowering l1_ratio:

=ML.REGRESSION.ELASTIC_NET(1.0, 0.1)

Remarks

  • alpha controls overall regularization strength; l1_ratio (0–1) controls the mix between L2 (Ridge) and L1 (Lasso). l1_ratio=0.5 is a balanced starting point.
  • Scale your features first (e.g. with ML.PREPROCESSING.STANDARD_SCALER) — both penalty terms are sensitive to feature scale.
  • When correlated features form natural groups, Elastic Net tends to keep or drop the whole group together — unlike Lasso, which picks one and zeros the rest.
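The scaling remark can be illustrated outside the sheet. A minimal z-scoring sketch of what a standard scaler does to one feature column (a hypothetical helper, not part of the add-in — ML.PREPROCESSING.STANDARD_SCALER is the in-sheet route):

```python
def standard_scale(column):
    """Z-score a list of values: subtract the mean, divide by the
    (population) standard deviation, so the result has mean 0 and
    unit variance."""
    n = len(column)
    mean = sum(column) / n
    variance = sum((x - mean) ** 2 for x in column) / n
    std = variance ** 0.5 or 1.0  # guard against zero-variance columns
    return [(x - mean) / std for x in column]

scaled = standard_scale([10.0, 20.0, 30.0])
print(scaled)  # symmetric around 0 with unit variance
```

Without this step, a feature measured in large units carries a proportionally larger penalty, so regularization would shrink its coefficient for reasons of scale rather than signal.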

See also