ML.CLASSIFICATION.LOGISTIC¶
Creates a Logistic Regression object.
Syntax¶
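The syntax block did not survive in this copy of the page. A plausible signature, inferred from the Arguments table below (the argument names and their order are an assumption):

```
=ML.CLASSIFICATION.LOGISTIC([C], [penalty], [fit_intercept], [max_iter], [tol])
```

All arguments appear to be optional, falling back to the defaults listed under Arguments.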
Arguments¶
| Name | Type | Default | Description |
|---|---|---|---|
| C | float | 1.0 | Inverse of regularization strength; must be a positive float. Smaller values specify stronger regularization. |
| penalty | str | "l2" | Norm used in the penalization. Common values: "l1", "l2", "elasticnet", or "none". |
| fit_intercept | bool | TRUE | Specifies if a constant (bias or intercept) should be added to the decision function. |
| max_iter | int | 100 | Maximum number of iterations taken for the solvers to converge. |
| tol | float | 0.0001 | Tolerance for stopping criteria. |
Returns¶
A logistic regression model handle, ready to pass into `ML.FIT`.
When to use¶
Reach for logistic regression when you need a fast, interpretable linear baseline classifier — especially as the first model on a fresh dataset before trying anything heavier. It works well when the relationship between your features and the class label is roughly linear, when you have many more rows than columns, and when you want to read off feature coefficients to explain the model.
Compared to the alternatives in this namespace:
- Use logistic when you want speed, interpretability, and a strong baseline.
- Use `ML.CLASSIFICATION.SVM` when classes are harder to separate and you need a non-linear kernel.
- Use `ML.CLASSIFICATION.RANDOM_FOREST_CLF` when feature interactions matter or your data has a mix of numeric and categorical features.
Examples¶
Build a logistic model and fit it against features in A2:E100 and the target
label in F2:F100, then predict the labels for ten new rows in A101:E110:
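The formulas for this example were not preserved; a sketch, assuming the companion `ML.FIT` (mentioned under Returns) and a hypothetical `ML.PREDICT` function follow the usual handle-in, handle-out pattern:

```
In G1:  =ML.CLASSIFICATION.LOGISTIC()        model handle with default settings
In G2:  =ML.FIT(G1, A2:E100, F2:F100)        fit on features A2:E100, labels F2:F100
In G3:  =ML.PREDICT(G2, A101:E110)           predicted labels for the ten new rows
```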
Apply L1 regularization to drive small coefficients to zero — useful for selecting the most informative features:
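A sketch of the call, passing the default `C` explicitly so the penalty can be set positionally (the positional order follows the Arguments table and is an assumption):

```
=ML.CLASSIFICATION.LOGISTIC(1.0, "l1")
```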
Tighten the regularization (smaller C) when you have many features and only
a few rows:
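For instance, lowering `C` from the default `1.0` to `0.1` increases the regularization strength tenfold (a sketch; the original formula was not preserved):

```
=ML.CLASSIFICATION.LOGISTIC(0.1)
```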
Remarks¶
- `C` is the inverse of regularization strength: smaller `C` = stronger regularization. Defaults to `1.0`.
- `penalty="l1"` produces sparse coefficients (effectively a feature selector); `"l2"` (default) shrinks all coefficients smoothly.
- For multi-class targets the function uses one-vs-rest internally; no extra setup is needed in the spreadsheet.
- Always scale your features (e.g. with `ML.PREPROCESSING.STANDARD_SCALER`) before fitting; logistic regression is sensitive to feature magnitude.
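The scaling advice can be sketched as a two-step pipeline. `ML.PREPROCESSING.STANDARD_SCALER` is named in this documentation set, but its exact signature, and the assumption that it spills a standardized copy of the input range, are illustrative guesses:

```
In G1:  =ML.PREPROCESSING.STANDARD_SCALER(A2:E100)    standardized copy of the features
In H1:  =ML.CLASSIFICATION.LOGISTIC()                 model handle
In H2:  =ML.FIT(H1, G1#, F2:F100)                     fit on the scaled features
```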