Issue
I already know "xgboost.XGBRegressor
is a Scikit-Learn Wrapper interface for XGBoost."
But do they have any other difference?
Solution
xgboost.train is the low-level API to train a model via the gradient boosting method.

xgboost.XGBRegressor and xgboost.XGBClassifier are the wrappers ("Scikit-Learn-like wrappers", as they call it) that prepare the DMatrix and pass in the corresponding objective function and parameters. In the end, the fit call simply boils down to:
self._Booster = train(params, dmatrix,
                      self.n_estimators, evals=evals,
                      early_stopping_rounds=early_stopping_rounds,
                      evals_result=evals_result, obj=obj, feval=feval,
                      verbose_eval=verbose)
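For concreteness, here is a minimal sketch of fitting the same model through both interfaces. The dataset and hyperparameter values are illustrative, and it assumes a reasonably recent xgboost where the built-in squared-error objective is named reg:squarederror:

import numpy as np
import xgboost as xgb

# Toy regression data (shapes and values are illustrative).
rng = np.random.RandomState(0)
X = rng.rand(100, 5)
y = X @ rng.rand(5)

# High-level: the wrapper builds the DMatrix internally.
reg = xgb.XGBRegressor(n_estimators=10, max_depth=3, learning_rate=0.1)
reg.fit(X, y)

# Low-level: build the DMatrix yourself and call xgboost.train directly.
dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "reg:squarederror", "max_depth": 3, "eta": 0.1}
booster = xgb.train(params, dtrain, num_boost_round=10)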
This means that everything that can be done with XGBRegressor and XGBClassifier is doable via the underlying xgboost.train function. The reverse is obviously not true: for instance, some useful parameters of xgboost.train are not supported in the XGBModel API. The list of notable differences includes (each is illustrated in the sketch after the list):
- xgboost.train allows setting the callbacks applied at the end of each iteration.
- xgboost.train allows training continuation via the xgb_model parameter.
- xgboost.train allows not only minimization of the eval function, but maximization as well.
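Here is a minimal, hedged sketch of all three. The dataset is illustrative; the TrainingCallback interface assumes xgboost >= 1.3, and feval/maximize follow the classic xgboost.train signature quoted above (newer releases rename feval to custom_metric):

import numpy as np
import xgboost as xgb

# Toy binary-classification data (purely illustrative).
rng = np.random.RandomState(0)
X = rng.rand(200, 5)
y = (X.sum(axis=1) > 2.5).astype(int)
dtrain = xgb.DMatrix(X[:150], label=y[:150])
dvalid = xgb.DMatrix(X[150:], label=y[150:])
params = {"objective": "binary:logistic", "max_depth": 3, "eta": 0.1}

# 1) callbacks: hooks run at the end of each boosting iteration.
class PrintIteration(xgb.callback.TrainingCallback):
    def after_iteration(self, model, epoch, evals_log):
        print(f"finished iteration {epoch}")
        return False  # returning True would stop training

booster = xgb.train(params, dtrain, num_boost_round=5,
                    callbacks=[PrintIteration()])

# 2) xgb_model: continue boosting from an existing Booster.
booster = xgb.train(params, dtrain, num_boost_round=5, xgb_model=booster)

# 3) maximize: with a custom eval function, maximize it rather than
#    minimize it (paired here with early stopping on the eval set).
def accuracy(preds, dmat):
    labels = dmat.get_label()
    return "accuracy", float(((preds > 0.5) == labels).mean())

booster = xgb.train(params, dtrain, num_boost_round=20,
                    evals=[(dvalid, "valid")],
                    feval=accuracy, maximize=True,
                    early_stopping_rounds=5)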
Answered By - Maxim