I can't help asking this.
Here we have:
with torch.no_grad():
    for data in testloader:
        images, labels = data
        output = net(images)

        class_probs_batch = [F.softmax(el, dim=0) for el in output]
        _, class_preds_batch = torch.max(output, 1)

        class_probs.append(class_probs_batch)
        class_preds.append(class_preds_batch)
but shouldn't it instead be:
with torch.no_grad():
    for data in testloader:
        images, labels = data
        output = net(images)

        class_probs_batch = [F.softmax(el, dim=0) for el in output]
        _, class_preds_batch = torch.max(output, 1)

        class_probs.append(class_probs_batch)
        # class_preds.append(class_preds_batch)
        class_preds.append(labels)  # append the ground-truth labels instead of the predictions
?
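
My reasoning, as a rough sketch of the downstream step (assuming the tutorial's later PR-curve code, where writer is a SummaryWriter and class_probs / class_preds are the lists filled in the loop above): add_pr_curve expects the ground-truth labels together with the predicted probabilities, so appending the argmax predictions here would not give it the true labels it needs.

    import torch
    from torch.utils.tensorboard import SummaryWriter

    writer = SummaryWriter()  # assumed, as in the tutorial

    # flatten the per-batch lists collected in the loop above
    test_probs = torch.cat([torch.stack(batch) for batch in class_probs])  # shape (N, num_classes)
    test_label = torch.cat(class_preds)  # shape (N,), ground-truth class indices if we append `labels`

    # one precision-recall curve per class: needs binary ground truth + predicted probability
    for class_index in range(test_probs.shape[1]):
        writer.add_pr_curve(
            str(class_index),            # tag
            test_label == class_index,   # true membership in this class
            test_probs[:, class_index],  # predicted probability of this class
            global_step=0,
        )
    writer.close()

With the predictions appended instead of the labels, the curve would compare the model against itself rather than against the ground truth. Or am I missing something?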