Issue
If I simply import the ResNet model from PyTorch in Colab and use it to train on my dataset, there are no issues. However, when I try to replace the last FC layer so that the number of output features changes from 1000 to 9, which is the number of classes in my dataset, I get the following error:
RuntimeError: Tensor for 'out' is on CPU, Tensor for argument #1 'self' is on CPU, but expected them to be on GPU (while checking arguments for addmm)
Working version:
import torch
import torchvision.models as models
from torch.nn import CrossEntropyLoss
from torch.optim import Adam

#model = Net()
model = models.resnet18(pretrained=True)

# defining the optimizer
optimizer = Adam(model.parameters(), lr=0.07)
# defining the loss function
criterion = CrossEntropyLoss()

# checking if GPU is available
if torch.cuda.is_available():
    model = model.cuda()
    criterion = criterion.cuda()
Version with error:
import torch
import torchvision.models as models
from torch.nn import CrossEntropyLoss
from torch.optim import Adam

#model = Net()
model = models.resnet18(pretrained=True)

# defining the optimizer
optimizer = Adam(model.parameters(), lr=0.07)
# defining the loss function
criterion = CrossEntropyLoss()

# checking if GPU is available
if torch.cuda.is_available():
    model = model.cuda()
    criterion = criterion.cuda()

# replacing the last FC layer after the model has already been moved to the GPU
model.fc = torch.nn.Linear(512, 9)
The error occurs at the training step, i.e. at
outputs = model(images)
How should I go about fixing this issue?
Solution
Simple error: the replacement fc layer is created on the CPU, so its weights end up on a different device from the rest of the model. Instantiate it before moving the model to the GPU with .cuda(). I.e.
model = models.resnet18(pretrained=True)
model.fc = torch.nn.Linear(512, 9)
if torch.cuda.is_available():
    model = model.cuda()
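
Alternatively, you can keep the original order and replace fc after the .cuda() call, as long as you move the new layer to the GPU as well, and your input batches must live on the same device as the model. Below is a minimal sketch of that variant; the num_classes value and the dummy images tensor are placeholders, not part of the original question.

import torch
import torchvision.models as models

num_classes = 9  # placeholder for the dataset's class count

model = models.resnet18(pretrained=True)
if torch.cuda.is_available():
    model = model.cuda()

# Replacing fc after .cuda() also works, provided the new layer is moved to the GPU too
model.fc = torch.nn.Linear(512, num_classes)
if torch.cuda.is_available():
    model.fc = model.fc.cuda()

# Inputs must be on the same device as the model
images = torch.randn(4, 3, 224, 224)  # dummy batch standing in for real data
if torch.cuda.is_available():
    images = images.cuda()
outputs = model(images)  # no device-mismatch error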
Answered By - pang54