Issue
I want to use a support vector machine to solve a regression problem: predicting teachers' incomes from a few features, a mixture of categorical and continuous variables. For example, I have race [white, asian, hispanic, black], number of years teaching, and years of education.
For the categorical feature, I used scikit-learn's preprocessing module to one-hot encode the four races. In this encoding a white teacher looks like [1,0,0,0], so I have an array such as [[1,0,0,0], [0,1,0,0], ..., [0,0,1,0], [1,0,0,0]] representing the race of each teacher, ready for SVR. I can perform a regression with race alone vs. income, i.e.:
from sklearn.svm import SVR

clf = SVR(C=1.0)
clf.fit(racearray, income)
I can also perform a regression using the continuous features on their own. What I don't know is how to combine the two kinds of features, i.e. something like:

continuousarray = zip(yearsteaching, yearseducation)
clf.fit((racearray, continuousarray), income)

which does not work.
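For reference, combining an already-encoded categorical array with continuous columns is just column-wise stacking; a minimal sketch with hypothetical data (the variable names mirror the question, the numbers are made up):

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical data: 4 teachers, race one-hot encoded as in the question
racearray = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0],
                      [0, 0, 1, 0],
                      [1, 0, 0, 0]])
yearsteaching = [3, 10, 7, 1]
yearseducation = [6, 8, 6, 4]
income = [40000, 62000, 55000, 38000]

# Stack the continuous features into columns, then glue them
# onto the one-hot race columns: one row per teacher
continuousarray = np.column_stack((yearsteaching, yearseducation))
X = np.hstack((racearray, continuousarray))  # shape (4, 6)

clf = SVR(C=1.0)
clf.fit(X, income)
```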
Solution
You can use scikit-learn's OneHotEncoder on the combined array. If your data are in a numpy array racearray whose columns are

[continuous_feature1, continuous_feature2, categorical, continuous_feature3]

your code should look like this (keep in mind that numpy column indexing starts at 0):
from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(categorical_features=[2])
race_encoded = enc.fit_transform(racearray)

(Note: the categorical_features argument was deprecated in scikit-learn 0.20 and removed in 0.22; in recent versions you select the column with ColumnTransformer instead.)
You can then inspect the race_encoded array as usual and use it in SVR:
clf = SVR(C=1.0)
clf.fit(race_encoded, income)
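On current scikit-learn (0.22+), the same idea is expressed with ColumnTransformer; a minimal sketch with hypothetical data (columns [years_teaching, years_education, race_code], values made up):

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import SVR

# Hypothetical array: columns [years_teaching, years_education, race_code]
data = np.array([[3.0, 6.0, 0],
                 [10.0, 8.0, 1],
                 [7.0, 6.0, 2],
                 [1.0, 4.0, 3]])
income = [40000, 62000, 55000, 38000]

# One-hot encode column 2; pass the continuous columns through unchanged
enc = ColumnTransformer(
    [("race", OneHotEncoder(), [2])],
    remainder="passthrough",
)
X = enc.fit_transform(data)  # 4 race indicator columns + 2 continuous columns

clf = SVR(C=1.0)
clf.fit(X, income)
```

The remainder="passthrough" part is what keeps the continuous features alongside the encoded categorical one, which is the "combining" step the question was missing.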
Answered By - lanenok