Implementation: scikit-learn LogisticRegression __init__
| Field | Value |
|---|---|
| source | [scikit-learn](https://github.com/scikit-learn/scikit-learn) |
| domains | Data_Science, Machine_Learning |
| last_updated | 2026-02-08 15:00 GMT |
Overview
Concrete tool for instantiating a logistic regression classifier provided by scikit-learn.
Description
The LogisticRegression.__init__ method configures a logistic regression estimator with all hyperparameters required for subsequent training. Logistic regression is a linear model for classification that models the posterior probability of each class using the logistic (sigmoid) function. The constructor accepts parameters controlling regularization, optimization, convergence, and parallelism without performing any computation.
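The posterior probability mentioned above can be sketched in a few lines. This is an illustrative hand-rolled computation, not scikit-learn's internal code; the weights, bias, and input below are made-up values for a two-feature binary problem:

```python
import numpy as np

def sigmoid(z):
    # logistic function: maps any real-valued score to a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical weights, bias, and input sample (illustrative values only)
w = np.array([0.8, -0.5])
b = 0.1
x = np.array([1.0, 2.0])

# posterior probability of the positive class for this sample
p = sigmoid(w @ x + b)
```

A fitted LogisticRegression estimator applies the same transformation to its learned coefficients and intercept.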
Usage
- Setting up a baseline linear classifier for binary or multiclass classification tasks.
- Configuring regularization strength (C) and type (l1_ratio) to control model complexity.
- Selecting an optimization solver (solver) appropriate for the dataset size and regularization type.
- Preparing an estimator instance for use in pipelines, grid search, or standalone training.
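The pipeline and grid-search usage above can be sketched as follows; the dataset, grid values, and step names are illustrative assumptions, not part of the reference:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# small synthetic binary dataset (parameters are illustrative)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=200, random_state=0)),
])

# search over the inverse regularization strength C
grid = GridSearchCV(pipe, {"clf__C": [0.01, 0.1, 1.0, 10.0]}, cv=3)
grid.fit(X, y)
```

Because the constructor only stores hyperparameters, the same unfitted instance can be reused across grid-search candidates; GridSearchCV clones it for each parameter combination.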
Code Reference
Source Location
sklearn/linear_model/_logistic.py, method LogisticRegression.__init__
Signature
```python
def __init__(
    self,
    penalty="deprecated",
    *,
    C=1.0,
    l1_ratio=0.0,
    dual=False,
    tol=1e-4,
    fit_intercept=True,
    intercept_scaling=1,
    class_weight=None,
    random_state=None,
    solver="lbfgs",
    max_iter=100,
    verbose=0,
    warm_start=False,
    n_jobs=None,
):
```
Import
```python
from sklearn.linear_model import LogisticRegression
```
I/O Contract
Inputs (Constructor Parameters)
| Parameter | Type | Default | Description |
|---|---|---|---|
| penalty | str | "deprecated" | Regularization norm. Deprecated in v1.8; use l1_ratio and C instead. |
| C | float | 1.0 | Inverse of regularization strength. Smaller values specify stronger regularization. Set to np.inf to disable regularization. |
| l1_ratio | float | 0.0 | The Elastic-Net mixing parameter. 0.0 is equivalent to L2, 1.0 is L1, and values in between produce Elastic-Net regularization. |
| dual | bool | False | Dual formulation (only for L2 penalty with the liblinear solver). Prefer dual=False when n_samples > n_features. |
| tol | float | 1e-4 | Tolerance for the stopping criterion of the solver. |
| fit_intercept | bool | True | Whether to add a constant (bias) term to the decision function. |
| intercept_scaling | float | 1 | Scaling factor for the intercept term when using the liblinear solver. |
| class_weight | dict or "balanced" or None | None | Weights associated with classes. If "balanced", weights are inversely proportional to class frequencies. |
| random_state | int, RandomState, or None | None | Seed for reproducibility. Used when solver is "sag", "saga", or "liblinear". |
| solver | str | "lbfgs" | Optimization algorithm. Choices: "lbfgs", "liblinear", "newton-cg", "newton-cholesky", "sag", "saga". |
| max_iter | int | 100 | Maximum number of iterations for the solver to converge. |
| verbose | int | 0 | Verbosity level for solver output. |
| warm_start | bool | False | If True, reuse the solution of the previous call to fit as initialization. |
| n_jobs | int or None | None | Deprecated since v1.8. Previously controlled parallel jobs for OvR multiclass fitting. |
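The "balanced" heuristic for class_weight can be sketched with scikit-learn's helper, which follows the documented formula n_samples / (n_classes * np.bincount(y)); the label array below is a made-up imbalanced example:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# illustrative imbalanced binary labels: 8 negatives, 2 positives
y = np.array([0] * 8 + [1] * 2)

# "balanced" weights: n_samples / (n_classes * np.bincount(y))
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
# the minority class receives the larger weight
```

Passing class_weight="balanced" to the constructor applies this same reweighting inside fit.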
Outputs
| Return | Type | Description |
|---|---|---|
| instance | LogisticRegression | A configured (unfitted) estimator instance with all hyperparameters stored as attributes. |
Usage Examples
Default instantiation:
```python
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression()
print(clf.get_params())
# {'C': 1.0, 'class_weight': None, 'dual': False, 'fit_intercept': True, ...}
```
Custom hyperparameters:
```python
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(
    C=0.5,
    l1_ratio=0.0,
    solver="lbfgs",
    max_iter=200,
    random_state=42,
)
```
Using L1 regularization with the saga solver:
```python
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(
    C=1.0,
    l1_ratio=1.0,
    solver="saga",
    max_iter=500,
)
```