Issue
I understand that both the LinearRegression class and the SGDRegressor class from scikit-learn perform linear regression. However, only SGDRegressor uses Gradient Descent as the optimization algorithm. What optimization algorithm does LinearRegression use, then, and what are the other significant differences between these two classes?
Solution
LinearRegression does not use an iterative optimizer at all: it solves the ordinary least-squares problem in closed form (internally via scipy.linalg.lstsq, an SVD-based solver). Its loss is always the sum of squared errors.
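A minimal sketch of that closed-form fit (the toy data below is made up for illustration, not from the original answer):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Noiseless data following y = 3x + 2, so the closed-form
# least-squares solution recovers the coefficients exactly.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = 3.0 * X.ravel() + 2.0

model = LinearRegression()
model.fit(X, y)  # solved in one shot, no iterations and no learning rate

print(model.coef_[0], model.intercept_)  # → 3.0 2.0 (up to float rounding)
```

Because the solution is exact, there is nothing to tune here: no learning rate, no number of epochs, no convergence tolerance.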
For SGDRegressor you can choose the loss function (squared error by default; Huber and epsilon-insensitive losses are also available), and it fits the model with Stochastic Gradient Descent (SGD): the training set is processed one sample at a time, and after each sample the parameters are updated in the direction of the negative error gradient.
In simple words: you can train SGDRegressor on a training dataset that does not fit into RAM, and its partial_fit method lets you update the model with a new batch of data without retraining on the whole dataset.
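A small sketch of that incremental workflow using partial_fit (the streamed batches here are synthetic, standing in for chunks read from disk):

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
model = SGDRegressor(random_state=0)  # default loss: squared error

# Feed the data in small batches, as if each batch were streamed
# from a file too large to load into memory at once.
for _ in range(200):
    X_batch = rng.uniform(0.0, 1.0, size=(32, 1))
    y_batch = 3.0 * X_batch.ravel() + 2.0
    model.partial_fit(X_batch, y_batch)  # incremental weight update

# The model approximates y = 3x + 2 without ever seeing the full dataset.
print(model.predict(np.array([[0.5]])))
```

Note that SGD is sensitive to feature scaling, so in practice the batches are usually standardized (e.g. with StandardScaler) before being passed to partial_fit; the features above are already in [0, 1].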
Answered By - Danylo Baibak