Weird error with NLLLoss2d

This works:

import torch
import torch.nn as nn
from torch import autograd

m = nn.Conv2d(16, 32, (3, 3)).float()
loss = nn.NLLLoss2d()
input = autograd.Variable(torch.randn(3, 16, 10, 10))
target = autograd.Variable(torch.LongTensor(3, 8, 8).random_(0, 4))  # shape (3, 8, 8)
input = m(input)  # conv output shape (3, 32, 8, 8)
output = loss(input, target)

But this one does not work:

import numpy as np

m = nn.Conv2d(16, 32, (3, 3)).float()
loss = nn.NLLLoss2d()
input = autograd.Variable(torch.randn(3, 16, 10, 10))
target = np.arange(3 * 32 * 8 * 8).reshape(3, 32, 8, 8).astype('int64')
target = autograd.Variable(torch.from_numpy(target))  # converted from an ndarray
input = m(input)
output = loss(input, target)

The input and target in both cases have the same shape and type, but the latter one (converted from an ndarray) always throws a runtime error:

RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of dimension: 4 at /b/wheel/pytorch-src/torch/lib/THNN/generic/SpatialClassNLLCriterion.c:39

Why does this happen? What is the correct way to convert from a numpy array to a torch tensor? Thanks.

I think the target argument you pass to the loss function is wrong. As shown in the PyTorch documentation, the shape of the input should be Batch x C x H x W, while the target should be Batch x H x W: http://pytorch.org/docs/master/nn.html?highlight=torch%20nn%20nll#torch.nn.NLLLoss2d
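For reference, here is a minimal sketch of building the target from a numpy array in the expected Batch x H x W layout, assuming 32 classes to match the conv layer above (the random target values are only illustrative):

import numpy as np
import torch
import torch.nn as nn
from torch import autograd

m = nn.Conv2d(16, 32, (3, 3)).float()
loss = nn.NLLLoss2d()
input = autograd.Variable(torch.randn(3, 16, 10, 10))

# target: class indices in [0, 32), shape (batch, H, W) = (3, 8, 8)
target_np = np.random.randint(0, 32, size=(3, 8, 8)).astype('int64')
target = autograd.Variable(torch.from_numpy(target_np))

output = loss(m(input), target)

Note that, as in your snippets, the conv output is passed to NLLLoss2d directly here; in practice you would normally apply a log_softmax to it first, since NLLLoss2d expects log-probabilities.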
