
ML.REGRESSION.LINEAR

Creates a Linear Regression object.

Syntax

ML.REGRESSION.LINEAR(fit_intercept)

Arguments

Name           Type     Default  Description
fit_intercept  Boolean  TRUE     Whether to calculate the intercept for this model. If set to FALSE, no intercept is used in calculations (i.e., the data is expected to be centered).

Returns

A Linear Regression model handle, ready to pass into ML.FIT.

When to use

Reach for plain linear regression when you need the simplest possible numeric model and a clear interpretation of how each feature shifts the prediction. It is the right baseline whenever the relationship between your features and the target looks roughly linear, you have no obvious multicollinearity, and you have far more rows than columns.

Compared to the alternatives in this namespace:

  • Use linear as the unregularized baseline.
  • Use ML.REGRESSION.RIDGE when features are correlated or you have only slightly more rows than columns.
  • Use ML.REGRESSION.LASSO when you suspect only a handful of features actually matter and want the others zeroed out.
  • Use ML.REGRESSION.ELASTIC_NET when you want a blend of Ridge and Lasso.
  • Use ML.REGRESSION.RANDOM_FOREST_REG when the relationship is non-linear or features interact.
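The ridge-versus-linear distinction above can be seen directly with correlated features. The sketch below uses scikit-learn, which is an assumption about the semantics these worksheet functions mirror (the fit_intercept parameter matches scikit-learn's LinearRegression); the data is synthetic and purely illustrative.

```python
# Illustrative sketch (scikit-learn, not the worksheet functions):
# with two nearly identical columns, plain OLS splits the shared signal
# arbitrarily between them, while ridge shrinks toward a stable split.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 1))
# Second column is a near-duplicate of the first (near-collinear design).
X = np.hstack([x, x + rng.normal(scale=1e-3, size=(100, 1))])
y = (2 * x).ravel() + rng.normal(scale=0.1, size=100)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)

# Both models recover a coefficient *sum* near 2, but OLS typically puts
# large offsetting weights on the twin columns; ridge keeps both small.
print("OLS coefficients:  ", ols.coef_)
print("Ridge coefficients:", ridge.coef_)
```

The coefficient sum is stable in both cases; it is the individual coefficients that become unreliable under OLS, which is why the ridge recommendation above kicks in for correlated features.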

Examples

Fit an ordinary least-squares model on features in A2:E100 and the numeric target in F2:F100, then predict ten new rows in A101:E110. The three formulas below are entered in H1, H2, and H3 respectively: ML.FIT reads the model handle from H1, and ML.PREDICT reads the fitted model from H2:

=ML.REGRESSION.LINEAR()
=ML.FIT(H1, A2:E100, F2:F100)
=ML.PREDICT(H2, A101:E110)

Force the line through the origin (no intercept term) when you know the target must be zero when every feature is zero:

=ML.REGRESSION.LINEAR(FALSE)
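In scikit-learn terms (an assumption about the behavior this flag mirrors; fit_intercept is the same parameter name there), disabling the intercept pins it to exactly zero rather than estimating it from the data:

```python
# Minimal sketch of fit_intercept=False in scikit-learn (assumed to
# mirror ML.REGRESSION.LINEAR(FALSE)): the intercept is forced to 0.0.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.0, 4.0, 6.0, 8.0])  # exactly y = 2x, zero at zero

model = LinearRegression(fit_intercept=False).fit(X, y)
print(model.coef_, model.intercept_)  # slope 2.0, intercept 0.0
```

If the true relationship does not pass through the origin, forcing the intercept to zero biases every prediction, so only use this when the zero-at-zero constraint is known from the problem itself.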

Remarks

  • Plain linear regression has no regularization. If your features are correlated, switch to ML.REGRESSION.RIDGE to get a more stable fit.
  • Score a fitted model on held-out data with ML.EVAL.SCORE or any of the metrics under ML.EVAL.REGRESSION.* (e.g. R2_SCORE, MEAN_SQUARED_ERROR).
  • Scale numeric features (e.g. with ML.PREPROCESSING.STANDARD_SCALER) only if you plan to compare coefficient magnitudes — pure ordinary least squares is otherwise scale-equivariant.
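The held-out scoring workflow in the remarks above can be sketched as follows. This uses scikit-learn's r2_score and mean_squared_error, an assumption about the metrics that ML.EVAL.REGRESSION.R2_SCORE and MEAN_SQUARED_ERROR correspond to; the data is synthetic.

```python
# Illustrative only: fit on training rows, score on held-out rows with
# R^2 and mean squared error (assumed counterparts of the ML.EVAL metrics).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

r2 = r2_score(y_te, pred)
mse = mean_squared_error(y_te, pred)
print(f"R^2 = {r2:.3f}, MSE = {mse:.4f}")
```

Scoring on rows the model never saw is what distinguishes genuine fit quality from memorization, which is why the remark points you to held-out data rather than the training range.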

See also