Issue
I'm using GridSearchCV
to tune the hyperparameters of my model:
grid_search = GridSearchCV(estimator=xg_clf, scoring='f1', param_grid=param_grid, n_jobs=-1, cv=kfold)
However, my supervisor wants me to score with the Matthews correlation coefficient, which unfortunately is not one of the available options:
>>> sorted(sklearn.metrics.SCORERS.keys())
['accuracy', 'adjusted_mutual_info_score', 'adjusted_rand_score', 'average_precision', 'balanced_accuracy', 'completeness_score', 'explained_variance', 'f1', 'f1_macro', 'f1_micro', 'f1_samples', 'f1_weighted', 'fowlkes_mallows_score', 'homogeneity_score', 'jaccard', 'jaccard_macro', 'jaccard_micro', 'jaccard_samples', 'jaccard_weighted', 'max_error', 'mutual_info_score', 'neg_brier_score', 'neg_log_loss', 'neg_mean_absolute_error', 'neg_mean_absolute_percentage_error', 'neg_mean_gamma_deviance', 'neg_mean_poisson_deviance', 'neg_mean_squared_error', 'neg_mean_squared_log_error', 'neg_median_absolute_error', 'neg_root_mean_squared_error', 'normalized_mutual_info_score', 'precision', 'precision_macro', 'precision_micro', 'precision_samples', 'precision_weighted', 'r2', 'rand_score', 'recall', 'recall_macro', 'recall_micro', 'recall_samples', 'recall_weighted', 'roc_auc', 'roc_auc_ovo', 'roc_auc_ovo_weighted', 'roc_auc_ovr', 'roc_auc_ovr_weighted', 'top_k_accuracy', 'v_measure_score']
I've read "Demonstration of multi-metric evaluation on cross_val_score and GridSearchCV" in the docs, but it doesn't look like this can be done easily.
How can I use the Matthews correlation coefficient as the scoring metric with GridSearchCV?
Solution
sklearn provides make_scorer for exactly this. From help(sklearn.metrics.make_scorer):
Help on function make_scorer in module sklearn.metrics._scorer:
make_scorer(score_func, *, greater_is_better=True, needs_proba=False, needs_threshold=False, **kwargs)
Make a scorer from a performance metric or loss function.
This factory function wraps scoring functions for use in
:class:`~sklearn.model_selection.GridSearchCV` and
:func:`~sklearn.model_selection.cross_val_score`.
It takes a score function, such as :func:`~sklearn.metrics.accuracy_score`,
:func:`~sklearn.metrics.mean_squared_error`, ...
So the solution is simply to wrap the metric with make_scorer and pass it the appropriate function from sklearn.metrics:
from sklearn.metrics import make_scorer, matthews_corrcoef

grid_search = GridSearchCV(estimator=xg_clf, scoring=make_scorer(matthews_corrcoef), param_grid=param_grid, n_jobs=-1, cv=kfold)
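For context, here is a minimal runnable sketch of the same approach. The estimator, parameter grid, and data below (RandomForestClassifier, a toy param_grid, make_classification) are stand-ins for the asker's xg_clf, param_grid, and dataset, not part of the original question:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import make_scorer, matthews_corrcoef
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Synthetic binary classification data (stand-in for the asker's dataset)
X, y = make_classification(n_samples=500, n_features=20, random_state=42)

# Stand-in estimator and parameter grid; swap in xg_clf / param_grid as needed
clf = RandomForestClassifier(random_state=42)
param_grid = {"n_estimators": [100, 200], "max_depth": [3, 5]}

# Wrap the metric so GridSearchCV can use it as a scorer
mcc_scorer = make_scorer(matthews_corrcoef)

kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
grid_search = GridSearchCV(
    estimator=clf,
    param_grid=param_grid,
    scoring=mcc_scorer,
    n_jobs=-1,
    cv=kfold,
)
grid_search.fit(X, y)

print(grid_search.best_params_)
print(grid_search.best_score_)  # mean cross-validated MCC of the best candidate

Note that make_scorer's greater_is_better defaults to True, which is what you want here: a higher Matthews correlation coefficient means better predictions.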
Answered By - con