Issue
import torch
torch.set_printoptions(precision=1, sci_mode=False)
numeric_seq_id = 2021080918959999952
t = torch.tensor(numeric_seq_id)
tt = torch.tensor(numeric_seq_id).float()  # !!! value changes here
print(t, tt)
The output is:
tensor(2021080918959999952) tensor(2021080905052848128.)
We can see that tt's value changed after the .float() conversion.
Why is there such a difference between the two values?
P.S. PyTorch version: 1.10.1; Python version: 3.8
Solution
This is not PyTorch-specific, but an artifact of how floats (and doubles) are represented in memory (see this question for more details). We can reproduce the same behavior in numpy:
import numpy as np
np_int = np.int64(2021080918959999952)       # 64-bit integer: stored exactly
np_float = np.float32(2021080918959999952)   # 32-bit float: rounded
np_double = np.float64(2021080918959999952)  # 64-bit float: rounded, but closer
print(np_int, int(np_float), int(np_double))
Output:
2021080918959999952 2021080905052848128 2021080918960000000
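The root cause is the size of the floating-point significand: float32 carries 24 significand bits, so it can represent every integer exactly only up to 2^24, while float64 carries 53 bits and is exact only up to 2^53. The 19-digit value above is roughly 2 x 10^18, far beyond both limits, so each conversion rounds to the nearest representable float. Here is a minimal sketch illustrating this; keeping the tensor in its default int64 dtype preserves the value exactly:

import torch

big = 2021080918959999952

# The value exceeds both exact-integer ranges:
print(big > 2**24)  # True -> not exactly representable in float32
print(big > 2**53)  # True -> not exactly representable in float64

# Round-tripping through each float dtype shows the rounding:
print(int(torch.tensor(big, dtype=torch.float32).item()))  # 2021080905052848128
print(int(torch.tensor(big, dtype=torch.float64).item()))  # 2021080918960000000

# Keeping int64 (the default dtype for Python ints) is lossless:
print(torch.tensor(big, dtype=torch.int64).item())         # 2021080918959999952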
Answered By - FlyingTeller