Issue
I have several columns of data that I plan to use for training an ANN regression model. Most of these columns have values ranging from 0 to 10,000.00, but one specific column has values that are always within the [0, 1] range, with precision of up to 10 decimal places, e.g. 0.1582639672.
Usually I would use the MinMaxScaler class from sklearn.preprocessing to normalize all the values of my dataset to the [0, 1] range; however, I am concerned about possible precision loss when applying normalization to this specific column.
Would normalizing float values with 10-decimal precision cause loss of data by producing "further normalized" values that exceed the precision a float type can faithfully represent?
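To make the concern concrete, here is a small check (assuming standard IEEE-754 doubles, which is what Python and NumPy use; doubles carry roughly 15-17 significant decimal digits, so a 10-decimal value has headroom to spare):

```python
# A value with 10 decimal places, as in the question.
x = 0.1582639672

# Simulate a MinMax-style transform on a column already in [0, 1]:
# scaled = (x - min) / (max - min). With min=0.0 and max=1.0 this is
# the identity, so nothing is lost at all.
scaled = (x - 0.0) / (1.0 - 0.0)
print(scaled == x)  # True: the transform is exact here

# Even a non-trivial rescale and its inverse stay within float64 noise:
lo, hi = 0.0, 10_000.0
y = x * (hi - lo) + lo       # pretend the column spanned [0, 10000]
back = (y - lo) / (hi - lo)
print(abs(back - x))         # rounding error on the order of 1e-16 or smaller
```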
Solution
Since that column's values are already in [0, 1], you can exclude it from the normalization process and re-add it after normalization is done. That way the column keeps its full precision, and because normalization is done column by column, excluding it won't affect the other columns.
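A minimal sketch of this approach using sklearn's ColumnTransformer, which scales the selected columns and passes the rest through untouched (the column names "a", "b", and "prob" are made up for illustration):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({
    "a": [0.0, 5_000.0, 10_000.0],
    "b": [10.0, 20.0, 30.0],
    "prob": [0.1582639672, 0.5, 0.9],  # already in [0, 1]
})

# Scale "a" and "b"; leave "prob" untouched via remainder="passthrough".
ct = ColumnTransformer(
    transformers=[("scale", MinMaxScaler(), ["a", "b"])],
    remainder="passthrough",
)
out = ct.fit_transform(df)

# Transformed columns come first, passthrough columns last, so the
# untouched "prob" values sit in the final column, bit-for-bit equal
# to the originals.
print(out[:, 2])
```

Equivalently, you could drop the column from the DataFrame before calling MinMaxScaler and concatenate it back afterwards; ColumnTransformer just keeps both steps in one object so the same split is applied at inference time.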
Answered By - ah_onaly