Issue
This is more of a theoretical question. I'm working with the scikit-learn package on an NLP task. scikit-learn provides many methods for both feature selection and the tuning of a model's parameters, and I'm wondering which I should do first.
If I use univariate feature selection, it's pretty obvious that I should do feature selection first and, with the selected features, then tune the parameters of the estimator.
But what if I want to use recursive feature elimination (RFE)? Should I first set the parameters with grid search using ALL the original features and only then perform feature selection? Or should I select the features first (with the estimator's default parameters) and then set the parameters using only the selected features?
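For concreteness, here is a minimal sketch of the second option (select first with default parameters, then tune); X, y, the linear SVC, and the grid values are hypothetical placeholders:

from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import GridSearchCV

# Option 2: run RFE with the estimator's default parameters first...
rfe = RFE(SVC(kernel="linear"), n_features_to_select=100)
X_selected = rfe.fit_transform(X, y)

# ...then grid search the parameters on the selected features only.
grid = GridSearchCV(SVC(kernel="linear"), {"C": [0.1, 1, 10]}, cv=5)
grid.fit(X_selected, y)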
EDIT
I'm having pretty much the same problem as the one stated here. At the time, there was no solution to it. Does anyone know if one exists now?
Solution
Personally I think RFE is overkill and too expensive in most cases. If you want to do feature selection on linear models, use univariate feature selection (for instance with chi2 tests), or use L1- or L1 + L2-regularized models with a grid-searched regularization parameter (usually named C or alpha in sklearn models).
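A minimal sketch of that univariate route, assuming a non-negative term-count matrix X and labels y (the grid values are placeholders, and the modern sklearn.model_selection API is used): the fraction of selected features and the regularization strength are tuned together in one grid search.

from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# chi2 requires non-negative features, e.g. term counts or tf-idf values.
pipe = Pipeline([
    ("select", SelectPercentile(chi2)),
    ("clf", LogisticRegression(penalty="l1", solver="liblinear")),  # L1-regularized linear model
])

param_grid = {
    "select__percentile": [10, 30, 100],  # percentage of features kept
    "clf__C": [0.01, 0.1, 1, 10],         # regularization parameter
}

search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)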
For highly non-linear problems with a lot of samples, you should try RandomForestClassifier, ExtraTreesClassifier, or GBRT models with grid-searched parameter selection (possibly using OOB score estimates), then use the compute_importances switch to get a ranking of features by importance and use that ranking for feature selection.
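A sketch of that tree-based route, again with placeholder values; note that the compute_importances switch has since been removed from scikit-learn, and the ranking is now always exposed through the feature_importances_ attribute after fitting:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Fit a (grid-searched) forest on all features; oob_score=True gives
# the out-of-bag estimate mentioned above.
forest = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
forest.fit(X, y)
print("OOB score estimate:", forest.oob_score_)

# Rank features by importance and keep, say, the top 100.
ranking = np.argsort(forest.feature_importances_)[::-1]
X_reduced = X[:, ranking[:100]]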
For highly non-linear problems with few samples, I don't think there is a solution. You must be doing neuroscience :)
Answered By - ogrisel