Issue
Is there a way to fit an sklearn Random Forest Regressor such that an all-zeros input gives a prediction of 0? For linear models, I know I can simply pass a fit_intercept=False argument upon initialization, and I want to replicate this behaviour for the random forest.
Does it make sense for a tree-based model to achieve what I'm trying to do? If so, how do I implement this?
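For reference, here is a minimal sketch of the linear-model behaviour I mean (toy data, purely illustrative):
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])

# With fit_intercept=False the fitted hyperplane passes through the origin,
# so an all-zeros input is guaranteed to predict 0
lr = LinearRegression(fit_intercept=False).fit(X, y)
lr.predict(np.array([[0.0]]))  # array([0.])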
Solution
Short answer: No.
Long answer:
Tree-based models are very different animals from linear ones; the notion of intercept does not even exist in trees.
To get some intuition on why this is so, let's adapt the simple example from the documentation (one decision tree with a single input feature):
import numpy as np
from sklearn.tree import DecisionTreeRegressor, plot_tree
import matplotlib.pyplot as plt
# Create a random dataset
rng = np.random.RandomState(1)
X = np.sort(5 * rng.rand(80, 1), axis=0)
y = np.sin(X).ravel()
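# Add noise to every 5th target value (uniform in roughly [-1.5, 1.5])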
y[::5] += 3 * (0.5 - rng.rand(16))
# Fit regression model
regr = DecisionTreeRegressor(max_depth=2)
regr.fit(X, y)
# Predict
X_test = np.arange(0.0, 5.0, 0.01)[:, np.newaxis]
y_pred = regr.predict(X_test)
# Plot the results
plt.figure()
plt.scatter(X, y, s=20, edgecolor="black", c="darkorange", label="data")
plt.plot(X_test, y_pred, color="cornflowerblue", label="max_depth=2", linewidth=2)
plt.xlabel("data")
plt.ylabel("target")
plt.title("Decision Tree Regression")
plt.legend()
plt.show()
Here is the output:
[Figure: Decision Tree Regression -- the noisy sine training data (orange points) with the tree's piecewise-constant prediction (blue step function, max_depth=2)]
Roughly speaking, decision trees approximate the data locally, as the step-like prediction above illustrates; hence a global notion such as an intercept simply does not exist in their universe.
What a regression tree actually returns as output is the mean value of the dependent variable y over the training samples that end up in the respective terminal node (leaf) during fitting. To see this, let's plot the tree we have just fitted above:
plt.figure()
plot_tree(regr, filled=True)
plt.show()
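Since the rendered figure is not reproduced here, you can also inspect the same structure as plain text; here is a small sketch using sklearn.tree's export_text helper (the feature name "X" is just a label I chose):
from sklearn.tree import export_text

# Print split thresholds and leaf values; the value in each leaf is the
# mean y of the training samples that reached it
print(export_text(regr, feature_names=["X"]))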
Traversing the tree in this very simple toy example, you should be able to convince yourself that the prediction for X=0 is 0.052 (left arrows correspond to the True condition of each node). Let's verify this:
regr.predict(np.array([0]).reshape(1, -1))
# array([0.05236068])
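To confirm that this number is nothing more than a leaf mean, here is a minimal sketch using the tree's apply method (which maps samples to leaf indices):
# Leaf index of each training sample, and of the query point X=0
leaf_ids = regr.apply(X)
zero_leaf = regr.apply(np.array([0]).reshape(1, -1))[0]

# Mean target of the training samples in that leaf -- equals the prediction
y[leaf_ids == zero_leaf].mean()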
I illustrated the above with a very simple decision tree to give you an idea of why the notion of an intercept does not exist here; the conclusion extends straightforwardly to any model that is built upon and consists of decision trees, such as the Random Forest.
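For completeness, here is a quick check with a Random Forest fitted on the same toy data (a minimal sketch; n_estimators and random_state are arbitrary choices, not from the original example). Its prediction at X=0 is the average of the individual trees' leaf means, and there is no initialization switch to force it through the origin:
from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(n_estimators=100, random_state=1)
rf.fit(X, y)  # same X, y as in the decision tree example above

# Each tree returns its own leaf mean for X=0; the forest averages them,
# which is in general nonzero
rf.predict(np.array([0]).reshape(1, -1))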
Answered By - desertnaut