# ML.REGRESSION.LINEAR
Creates a Linear Regression object.
## Syntax
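A minimal call sketch based on the single optional argument documented below; the brackets marking it optional are an assumption:

```
=ML.REGRESSION.LINEAR([fit_intercept])
```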
## Arguments
| Name | Type | Default | Description |
|---|---|---|---|
| fit_intercept | Boolean | TRUE | Whether to calculate the intercept for this model. If set to FALSE, no intercept is used in calculations (i.e., the data is expected to be centered). |
## Returns
A Linear Regression model handle, ready to pass into `ML.FIT`.
## When to use
Reach for plain linear regression when you need the simplest possible numeric model and a clear interpretation of how each feature shifts the prediction. It is the right baseline whenever the relationship between your features and the target looks roughly linear, you have no obvious multicollinearity, and you have far more rows than columns.
Compared to the alternatives in this namespace:
- Use `ML.REGRESSION.LINEAR` as the unregularized baseline.
- Use `ML.REGRESSION.RIDGE` when features are correlated or you have only slightly more rows than columns.
- Use `ML.REGRESSION.LASSO` when you suspect only a handful of features actually matter and want the others zeroed out.
- Use `ML.REGRESSION.ELASTIC_NET` when you want a blend of Ridge and Lasso.
- Use `ML.REGRESSION.RANDOM_FOREST_REG` when the relationship is non-linear or features interact.
## Examples
Fit an ordinary least-squares model on features in A2:E100 and the numeric
target in F2:F100, then predict ten new rows in A101:E110:
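A hedged sketch of those two steps; the `ML.FIT` argument order and the `ML.PREDICT` helper are assumptions, not confirmed on this page. Enter the first formula in a cell (say G1) so the second can reference the fitted handle:

```
=ML.FIT(ML.REGRESSION.LINEAR(), A2:E100, F2:F100)
=ML.PREDICT(G1, A101:E110)
```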
Force the line through the origin (no intercept term) when you know the relationship must be zero at zero:
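Passing FALSE for fit_intercept, again assuming the `ML.FIT` signature sketched above:

```
=ML.FIT(ML.REGRESSION.LINEAR(FALSE), A2:E100, F2:F100)
```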
## Remarks
- Plain linear regression has no regularization. If your features are correlated, switch to `ML.REGRESSION.RIDGE` for a more stable fit.
- Score a fitted model on held-out data with `ML.EVAL.SCORE` or any of the metrics under `ML.EVAL.REGRESSION.*` (e.g. `R2_SCORE`, `MEAN_SQUARED_ERROR`); see the sketch after this list.
- Scale numeric features (e.g. with `ML.PREPROCESSING.STANDARD_SCALER`) only if you plan to compare coefficient magnitudes; pure ordinary least squares is otherwise scale-equivariant.
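A minimal scoring sketch, assuming the fitted handle sits in G1, observed hold-out targets in F101:F110, and the `ML.PREDICT` output in H101:H110; the argument orders of `ML.EVAL.SCORE` and the metric functions are also assumptions:

```
=ML.EVAL.SCORE(G1, A101:E110, F101:F110)
=ML.EVAL.REGRESSION.MEAN_SQUARED_ERROR(F101:F110, H101:H110)
```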