Issue
I'm trying to apply MinMaxScaler to each row of a NumPy array, and I want it vectorized (I don't want to use a for loop).
import numpy as np
from sklearn.preprocessing import MinMaxScaler

example = np.array([[2.52163839, 2.54165282, 2.12608389, 2.54515915],
                    [2.29481214, 1.78448378, 2.26652405, 2.27311454],
                    [2.31706137, 2.29058921, 1.83225955, 2.29767736]])
I want the first row ([2.52163839, 2.54165282, 2.12608389, 2.54515915]) scaled relative to itself (so index 2 becomes 0, index 3 becomes 1, etc.), then the second row ([2.29481214, 1.78448378, 2.26652405, 2.27311454]) scaled relative to itself (so index 1 becomes 0, index 0 becomes 1, etc.), and so on for the whole array.
I tried example = np.array(list(map(MinMaxScaler().fit_transform(), example))), but it doesn't work because MinMaxScaler requires the data being scaled to be passed as an argument to the fit_transform method, so calling it with empty parentheses fails.
Thanks!
Solution
One can use map with a lambda function. Some reshaping is needed because MinMaxScaler expects a 2-D input and scales each column independently:
np.array(list(map(lambda x: MinMaxScaler().fit_transform(x.reshape(-1, 1)), example))).reshape(3, 4)
Output:
array([[0.94387462, 0.99163317, 0. , 1. ],
[1. , 0. , 0.94456885, 0.95748306],
[1. , 0.94539591, 0. , 0.96001663]])
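As a side note, the trailing reshape(3, 4) hardcodes this example's shape. A small variation (a sketch assuming example is always a 2-D array, with scaled as an illustrative name) reshapes back using example.shape instead:

# Same per-row scaling, but reshape back to whatever shape `example` has.
scaled = np.array(list(map(
    lambda x: MinMaxScaler().fit_transform(x.reshape(-1, 1)),
    example
))).reshape(example.shape)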
Edit: addition to include @NaiveBae's own solution
Simply scale the transposed array, then transpose back:
MinMaxScaler().fit_transform(example.T).T
This works because MinMaxScaler uses the following implementation, which computes the minimum and maximum along axis 0:
X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_scaled = X_std * (max - min) + min
By transposing, we move the axis we want scaled into axis 0 (the first axis), and transposing back restores the original layout.
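If the goal is to avoid Python-level iteration entirely, the same formula can be applied per row in pure NumPy with axis=1 and keepdims=True. This is a minimal sketch reproducing MinMaxScaler's default feature_range of (0, 1); row_min, row_max, and scaled are illustrative names:

# Min-max scale each row independently, fully vectorized in NumPy.
# keepdims=True keeps the result shaped (3, 1) so broadcasting works row-wise.
row_min = example.min(axis=1, keepdims=True)
row_max = example.max(axis=1, keepdims=True)
scaled = (example - row_min) / (row_max - row_min)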
Answered By - R.S.