Issue
I always put a cell like this at the top of my PyTorch notebooks:
import torch

device = (
    "cuda"
    if torch.cuda.is_available()
    else "mps"
    if torch.backends.mps.is_available()
    else "cpu"
)
torch.set_default_device(device)
This way I can conveniently use CUDA if the system has a GPU, MPS on a Mac, or the CPU on a vanilla system.
EDIT: Please note that, because of torch.set_default_device(device), any tensor is created on that device by default.
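As a minimal illustration (the variable name x here is mine, not from the original post):

x = torch.ones(3)   # no device argument given
print(x.device)     # reports the default device set above, e.g. "cuda:0"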
Now I'm trying to use a PyTorch generator:
g = torch.Generator(device=device).manual_seed(1)
and then:
A = torch.randn((3, 2), generator=g)
No problem whatsoever on my MacBook (where the device is MPS) or on CPU-only systems, but on my CUDA-enabled desktop I get:
RuntimeError: Expected a 'cpu' device type for generator but found 'cuda'
Any solution? If I simply don't specify a device for the generator, it will use the CPU, but then the tensor A will be created on the CPU too...
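For reference, a quick check (a sketch; g_default is an illustrative name) shows where a generator created without a device argument lives:

g_default = torch.Generator().manual_seed(1)  # no device given
print(g_default.device)                       # cpu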
Solution
When the tensor A is defined, torch.randn() defaults to creating it on the CPU, which causes the error because the generator you pass in lives on the CUDA device. You can solve this by passing the device parameter to torch.randn() as follows:
device = ("cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu")
g = torch.Generator(device=device).manual_seed(1)
A = torch.randn((3, 2), device=device, generator=g)
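As a quick sanity check (a sketch; the names g2 and B are illustrative), the tensor now lives on the selected device and the seeded generator is reproducible:

print(A.device)                                       # matches the selected device
g2 = torch.Generator(device=device).manual_seed(1)
B = torch.randn((3, 2), device=device, generator=g2)
print(torch.equal(A, B))                              # expected: True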
Edit: I just noticed that you call set_default_device() in your opening cell. Even with that default set, your torch.randn() call still expects a CPU generator when no explicit device argument is given (which is exactly what the error message reports), so passing device= explicitly is the reliable fix. The documentation for set_default_device() is here: https://pytorch.org/docs/stable/generated/torch.set_default_device.html.
Answered By - Inesh Loka