Issue
I'm working on a binary classification problem using logistic regression.
I know that with statsmodels it is possible to identify the significant variables via their p-values and to remove the non-significant ones in order to get a better-performing model.
import statsmodels.api as sm
# Add a constant to get an intercept
X_train_std_sm = sm.add_constant(X_train_std)
# Fit the model
log_reg = sm.Logit(y_train, X_train_std_sm).fit()
# show results
log_reg.summary()
                           Logit Regression Results
==============================================================================
Dep. Variable:                      y   No. Observations:                 1050
Model:                          Logit   Df Residuals:                     1043
Method:                           MLE   Df Model:                            6
Date:                Wed, 17 Aug 2022   Pseudo R-squ.:                  0.9562
Time:                        13:26:12   Log-Likelihood:                -29.285
converged:                       True   LL-Null:                       -668.34
Covariance Type:            nonrobust   LLR p-value:                5.935e-273
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
const          1.9836      0.422      4.699      0.000       1.156       2.811
x1             0.1071      0.414      0.259      0.796      -0.704       0.918
x2            -0.4270      0.395     -1.082      0.279      -1.200       0.346
x3            -0.7979      0.496     -1.610      0.107      -1.769       0.173
x4            -3.5670      0.702     -5.085      0.000      -4.942      -2.192
x5            -2.1548      0.608     -3.542      0.000      -3.347      -0.962
x6             5.4692      0.929      5.885      0.000       3.648       7.291
==============================================================================
In this case with statsmodels, I should remove 3 of my 6 variables, keep only the significant ones, and then refit the model.
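For reference, a minimal sketch of how those non-significant variables can be read off programmatically from the fitted statsmodels result, assuming a 5% threshold:
import numpy as np
# p-values in the same order as the summary table: const, x1, ..., x6
pvals = np.asarray(log_reg.pvalues)
# features whose p-value exceeds 5% (skip the intercept at position 0)
to_drop = pvals[1:] > 0.05
print(to_drop)  # given the summary above, x1, x2 and x3 are flagged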
Is it possible to do the same with sklearn? How can I know which variables to remove when their p-value is above 5%? How can I improve the logistic regression model's performance with sklearn? Do I need to fit a statsmodels model first and then use the selected variables in a sklearn model?
Here is my code:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
#transform data
y = df.is_genuine.values
X = df[df.columns[1:]].values
X_name = df[df.columns[1:]].columns
# split data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42, stratify=y)
#standardize data
std_scale = preprocessing.StandardScaler().fit(X_train)
# apply the fitted scaler to the train and test data
X_train_std = std_scale.transform(X_train)
X_test_std = std_scale.transform(X_test)
#logistic regression
reg_log = LogisticRegression(penalty='none', solver='newton-cg')
reg_log.fit(X_train_std, y_train)
#model training performance
reg_log.score(X_train_std, y_train)
>>> 0.9914285714285714
#model prediction
y_pred = reg_log.predict(X_test_std)
#test the model
pred = pd.DataFrame(X_test_std, columns=X_name)
pred['is_genuine'] = y_test
pred['pred_reglog'] = y_pred
pred['is_genuine_reglog'] = pred['pred_reglog'].apply(lambda x: True if x >0 else False)
# model evaluation
print (metrics.accuracy_score(y_test, y_pred))
>>> 0.9888888888888889
Solution
Short answer: just use statsmodels.
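If you still want the final model in sklearn, one option is to use the statsmodels fit only for the p-values, drop the non-significant columns, and refit the sklearn estimator on the reduced feature set. A minimal sketch, reusing the objects defined in the question (log_reg, X_train_std, X_test_std, y_train, y_test) and assuming a 5% threshold:
import numpy as np
from sklearn.linear_model import LogisticRegression
# p-values from the statsmodels Logit fit (const first, then x1..x6)
pvals = np.asarray(log_reg.pvalues)
keep = pvals[1:] < 0.05            # mask over the original features
# keep only the significant columns and refit the sklearn model
X_train_sel = X_train_std[:, keep]
X_test_sel = X_test_std[:, keep]
reg_log_sel = LogisticRegression(penalty='none', solver='newton-cg')
reg_log_sel.fit(X_train_sel, y_train)
print(reg_log_sel.score(X_test_sel, y_test))
This two-step approach is needed because sklearn's LogisticRegression does not report p-values itself.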
This question has a couple of sklearn implementations of this functionality in its answers section.
You can also resort to univariate tests like sklearn.feature_selection.f_regression() or sklearn.feature_selection.chi2() rather than using the p-values of an actual fitted model.
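A minimal sketch of that univariate route, again assuming a 5% cutoff (f_regression works on the standardized data; chi2 would require non-negative features):
from sklearn.feature_selection import f_regression
# univariate test of each feature against the target: returns F statistics and p-values
f_stats, p_values = f_regression(X_train_std, y_train)
# keep only the features whose univariate p-value is below 5%
keep = p_values < 0.05
X_train_sel = X_train_std[:, keep]
X_test_sel = X_test_std[:, keep]
sklearn.feature_selection.SelectFpr offers the same kind of p-value thresholding packaged as a transformer.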
Answered By - dx2-66