Issue
My validation set contains 150 images, but when I use my model to predict on it, my predictions have length 22. I don't understand why.
total_v = 0
correct_v = 0
batch_loss = 0  # was never initialized in the original snippet
with torch.no_grad():
    model.eval()
    for data_v, target_v in validloader:
        # Remap the original labels to binary {0, 1} targets
        if SK:
            target_v = torch.tensor(np.where(target_v.numpy() == 2, 1, 0).astype(np.longlong))
        else:
            target_v = torch.tensor(np.where(target_v.numpy() == 0, 1, 0).astype(np.longlong))
        data_v, target_v = data_v.to(device), target_v.to(device)
        outputs_v = model(data_v)
        loss_v = criterion(outputs_v, target_v)
        batch_loss += loss_v.item()
        _, pred_v = torch.max(outputs_v, dim=1)  # predicted class per sample in this batch
        correct_v += torch.sum(pred_v == target_v).item()
        total_v += target_v.size(0)
    val_acc.append(100 * correct_v / total_v)
    val_loss.append(batch_loss / len(validloader))
    network_learned = batch_loss < valid_loss_min
    print(f'validation loss: {np.mean(val_loss):.4f}, validation acc: {(100 * correct_v / total_v):.4f}\n')
This is my model:
model = models.resnet50(pretrained=True)
num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, 2)  # replace the final layer for 2 classes
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adagrad(model.parameters())
Solution
Your loop overwrites `pred_v` on every iteration, so after the loop it only holds the predictions for the last batch, not for all 150 images. If you want the full set of predictions, store each batch's predictions and concatenate them once the loop finishes:
...
all_preds = []
for data_v, target_v in validloader:
    ....
    _, pred_v = torch.max(outputs_v, dim=1)
    all_preds.append(pred_v)
    ....
all_preds = torch.cat(all_preds).cpu().numpy()
print(len(all_preds))
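As a self-contained sketch of this idea, using random tensors in place of a real model and DataLoader (the batch size of 7 is an assumption for illustration; ceil(150 / 7) = 22 batches, which would also explain why a per-batch quantity ends up with length 22), concatenating per-batch argmax predictions recovers one prediction per sample:

```python
import torch

# Stand-in for model outputs over a 150-image validation set.
fake_logits = torch.randn(150, 2)

# Mimic a DataLoader with batch size 7: 21 full batches + 1 batch of 3 = 22 batches.
batches = torch.split(fake_logits, 7)
print(len(batches))  # 22

all_preds = []
for outputs_v in batches:
    _, pred_v = torch.max(outputs_v, dim=1)  # class index per sample in this batch
    all_preds.append(pred_v)

# Concatenate the per-batch predictions back into one prediction per image.
all_preds = torch.cat(all_preds).cpu().numpy()
print(len(all_preds))  # 150
```

The same pattern works for collecting targets or probabilities; just append the per-batch tensor inside the loop and `torch.cat` once afterwards.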
Answered By - Alka