Issue
I have some existing PyTorch code that uses cuda(), as below, where net is a MainModel.KitModel object:
net = torch.load(model_path)
net.cuda()
and
im = cv2.imread(image_path)
im = Variable(torch.from_numpy(im).unsqueeze(0).float().cuda())
I want to test the code on a machine without any GPU, so I want to convert the CUDA code to a CPU version. I looked at some relevant posts about switching between CPU and GPU in PyTorch, but they revolve around the usage of device and thus did not seem to apply to my case.
Solution
As pointed out by kHarshit in his comment, you can simply replace each .cuda() call with .cpu():
net.cpu()
# ...
im = torch.from_numpy(im).unsqueeze(0).float().cpu()
However, this requires changing the code in multiple places every time you want to move from GPU to CPU and vice versa.
To alleviate this difficulty, PyTorch provides a more general method, .to().
You may have a device variable defining where you want PyTorch to run; this device can also be the CPU. For instance:
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
Once you have determined, in one place in your code, where you want/can run, simply use .to() to send your model/variables there:
net.to(device)
# ...
im = torch.from_numpy(im).unsqueeze(0).float().to(device)
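One caveat when loading on a CPU-only machine: a checkpoint saved on a GPU contains CUDA tensors, and torch.load will fail to restore them without CUDA available. Passing map_location to torch.load remaps the stored tensors onto the target device. A minimal sketch, using an in-memory buffer to stand in for model_path (the Linear model here is just a placeholder, not the original MainModel.KitModel):

```python
import io
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Simulate a saved checkpoint with an in-memory buffer (stands in for model_path).
buf = io.BytesIO()
torch.save(torch.nn.Linear(4, 2).state_dict(), buf)
buf.seek(0)

# map_location remaps any CUDA storages in the file onto `device`,
# so a GPU-saved checkpoint loads cleanly on a CPU-only machine.
state = torch.load(buf, map_location=device)
```

In the question's setup this would look like torch.load(model_path, map_location=device) in place of the bare torch.load(model_path).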
BTW, you can use .to() to control the data type (.float()) as well:
im = torch.from_numpy(im).unsqueeze(0).to(device=device, dtype=torch.float)
PS: note that the Variable API has been deprecated and is no longer required.
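Since the Variable wrapper is deprecated, plain tensors carry autograd state themselves; setting requires_grad=True is all that is needed. A small sketch:

```python
import torch

# Plain tensors replace the deprecated Variable wrapper; autograd works
# on any tensor created with requires_grad=True.
x = torch.ones(3, requires_grad=True)
y = (x * 2).sum()
y.backward()
print(x.grad)  # gradient of sum(2 * x) w.r.t. x is 2 everywhere
```

So the line from the question can drop the wrapper entirely: im = torch.from_numpy(im).unsqueeze(0).float().to(device).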
Answered By - Shai