Background:
Each image is associated with 14 labels, and the batch size is 4, so the size of labels is (4, 14). But when I get the predictions, each image has a single class prediction, so the size of preds is (4,). How shall I rectify the following error?

```
running_corrects += torch.sum(preds == labels.data)
RuntimeError: The size of tensor a (4) must match the size of tensor b (14) at non-singleton dimension 1
```
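For reference, the mismatch can be reproduced with dummy tensors of the shapes described above (the values are made up; only the shapes matter):

```python
import torch

preds = torch.zeros(4, dtype=torch.long)       # per-image class indices, shape (4,)
labels = torch.zeros(4, 14, dtype=torch.long)  # one-hot targets, shape (4, 14)

err = None
try:
    # Broadcasting aligns trailing dims: 4 vs. 14 cannot broadcast.
    torch.sum(preds == labels)
except RuntimeError as e:
    err = str(e)

print(err)  # reports the size mismatch at dimension 1
```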
I assume that preds are the class predictions created via e.g. torch.argmax(outputs, 1).
Also, it seems that your target is one-hot encoded. If that's the case, you should get the class indices via:

```python
labels = torch.argmax(labels, 1)
```
Note that the usage of .data is discouraged, as it might have unwanted side effects.