Hi there,

While building one of my networks I stumbled across something rather weird. I wanted to check the accuracy of my network by doing

Pred == Labels

I would expect to get back a 1D tensor the size of my batch. Instead I got a tensor of size batch x batch, and I don’t really get why. Looking deeper, I found that Pred was a tensor of shape (batch,) and Labels a tensor of shape (batch, 1). I would’ve thought that a tensor of shape (n,) and a tensor of shape (n, 1) would be treated the same, but apparently they aren’t. Can anyone tell me why this is?

If you want to test this yourself, try this piece of code:

import torch

a = torch.randn(16)

b = torch.randn((16, 1))

a.random_(1, 16)  # just to get some values the same

b.random_(1, 16)

a == b  # weird?

a == b.view(16)  # expected output

The way I solved this was by reshaping Labels with view to drop the extra dimension (see the example code).
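For reference, here is a minimal sketch of what I think is going on, assuming PyTorch follows NumPy-style broadcasting for comparisons (the tensor values here are just made up for illustration):

```python
import torch

pred = torch.tensor([1, 2, 3, 4])            # shape (4,)
labels = torch.tensor([[1], [0], [3], [4]])  # shape (4, 1)

# Broadcasting aligns shapes from the right: (4,) vs (4, 1)
# expands both sides to (4, 4), so every prediction is compared
# against every label instead of elementwise.
broadcasted = pred == labels
print(broadcasted.shape)  # torch.Size([4, 4])

# Dropping the extra dimension gives the expected elementwise result.
elementwise = pred == labels.squeeze(1)  # or labels.view(-1)
print(elementwise)  # tensor([ True, False,  True,  True])
```

If that is right, then squeeze(1) or view(-1) should both work as fixes, since they reduce Labels to shape (batch,) before the comparison.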

Thanks in advance!