Issue
In scikit-learn, all estimators have a fit() method, and, depending on whether they are supervised or unsupervised, they also have a predict() or transform() method.
I am in the process of writing a transformer for an unsupervised learning task and was wondering whether there is a rule of thumb for deciding where to put which kind of learning logic. The official documentation is not very helpful in this regard:
fit_transform(X, y=None, **fit_params)
Fit to data, then transform it.
In this context, what is meant by fitting the data, and what by transforming it?
Solution
Fitting finds the internal parameters of a model that will be used to transform data. Transforming applies the parameters to data. You may fit a model to one set of data, and then transform it on a completely different set.
For example, you fit a linear model to data to get a slope and intercept. Then you use those parameters to transform (i.e., map) new or existing values of x to y.
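To make the analogy concrete, here is a minimal sketch (not part of the original answer) using scikit-learn's LinearRegression on toy data; for a supervised model like this, predict() plays the role that transform() plays for transformers:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data lying exactly on the line y = 2x + 1
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

model = LinearRegression()
model.fit(X, y)  # fitting finds the internal parameters: slope and intercept

slope = model.coef_[0]        # ~2.0
intercept = model.intercept_  # ~1.0

# Applying those parameters maps new x values to y
y_new = model.predict(np.array([[10.0]]))  # ~21.0
```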
fit_transform simply performs both steps on the same data.
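As a quick illustration of that equivalence (using StandardScaler rather than anything from the original answer), fit(X) followed by transform(X) yields the same result as fit_transform(X):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 2.0], [2.0, 4.0], [1.0, 3.0]])

# Two-step version: learn the per-column mean and std, then apply them
scaler = StandardScaler()
scaler.fit(X)
two_step = scaler.transform(X)

# One-step version on the same data
one_step = StandardScaler().fit_transform(X)

same = np.allclose(two_step, one_step)  # True
```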
A scikit-learn example: you fit data to find the principal components, then transform your data to see how it maps onto those components:
from sklearn.decomposition import PCA

pca = PCA(n_components=2)
X = [[1, 2], [2, 4], [1, 3]]
pca.fit(X)

# This is the model used to map data
pca.components_
# array([[ 0.47185791,  0.88167459],
#        [-0.88167459,  0.47185791]], dtype=float32)

# Now we actually map the data
pca.transform(X)
# array([[-1.03896057, -0.17796634],
#        [ 1.19624651, -0.11592512],
#        [-0.15728599,  0.29389156]])

# Or we can do both "at once"
pca.fit_transform(X)
# array([[-1.03896058, -0.1779664 ],
#        [ 1.19624662, -0.11592512],
#        [-0.15728603,  0.29389152]], dtype=float32)
Answered By - inversion