Issue
I have a list with tensors:
[tensor([[0.4839, 0.3282, 0.1773, ..., 0.2931, 1.2194, 1.3533],
[0.4395, 0.3462, 0.1832, ..., 0.7184, 0.4948, 0.3998]],
device='cuda:0'),
tensor([[1.0586, 0.2390, 0.2315, ..., 0.9662, 0.1495, 0.7092],
[0.6403, 0.0527, 0.1832, ..., 0.1467, 0.8238, 0.4422]],
device='cuda:0')]
I want to stack all the [1 x features] matrices into one with np.concatenate(X), but this error appears:
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
How can I fix it?
Solution
Your tensors are still on the GPU, while NumPy operations run on the CPU. You can either move both tensors back to the CPU first, e.g. np.concatenate((a.cpu(), b.cpu())), as the error message suggests.
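A minimal sketch of that approach, assuming a CUDA device is available; the tensors a and b here are placeholders standing in for the entries of your list:

import numpy as np
import torch

# Example tensors on the GPU, standing in for the entries of your list
a = torch.rand(2, 6, device='cuda:0')
b = torch.rand(2, 6, device='cuda:0')

# Copy each tensor to host memory before handing it to NumPy
X = np.concatenate([t.cpu().numpy() for t in (a, b)])
print(X.shape)  # (4, 6)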
Or you can avoid moving off the GPU altogether and use torch.cat():
import torch

a = torch.ones(6)
b = torch.zeros(6)
# Concatenate along the first (and only) dimension
torch.cat([a, b], dim=0)
# tensor([1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0.])
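Applied to your list of GPU tensors, the same call keeps everything on the device. A sketch, assuming your list is named X and that you only need a NumPy array at the very end:

import torch

# X stands in for your list of cuda:0 tensors
X = [torch.rand(2, 6, device='cuda:0') for _ in range(2)]

stacked = torch.cat(X, dim=0)       # still on the GPU
as_numpy = stacked.cpu().numpy()    # move to host only if NumPy is required
print(as_numpy.shape)               # (4, 6)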
Answered By - user3474165