Understanding NLLLoss function

Hi Calin!

Please see (if I understand what you are asking) the description of
the “K-dimensional case” in the documentation for NLLLoss.

Here is an illustrative (PyTorch 0.3.0) script:

import torch
torch.__version__

torch.manual_seed (2020)

nBatch = 2
nClass = 4
width = 3
height = 5
input = torch.randn (nBatch, nClass, width, height)
target = torch.multinomial (torch.ones (nClass) / nClass, nBatch * width * height, replacement = True).resize_ (nBatch, width, height)

input.shape
target.shape
target.min()
target.max()

input = torch.autograd.Variable (input)
input = torch.nn.functional.log_softmax (input, dim = 1)
target = torch.autograd.Variable (target)

torch.nn.NLLLoss() (input, target)

And here is the output:

>>> import torch
>>> torch.__version__
'0.3.0b0+591e73e'
>>>
>>> torch.manual_seed (2020)
<torch._C.Generator object at 0x00000170D6456630>
>>>
>>> nBatch = 2
>>> nClass = 4
>>> width = 3
>>> height = 5
>>> input = torch.randn (nBatch, nClass, width, height)
>>> target = torch.multinomial (torch.ones (nClass) / nClass, nBatch * width * height, replacement = True).resize_ (nBatch, width, height)
>>>
>>> input.shape
torch.Size([2, 4, 3, 5])
>>> target.shape
torch.Size([2, 3, 5])
>>> target.min()
0
>>> target.max()
3
>>>
>>> input = torch.autograd.Variable (input)
>>> input = torch.nn.functional.log_softmax (input, dim = 1)
>>> target = torch.autograd.Variable (target)
>>>
>>> torch.nn.NLLLoss() (input, target)
Variable containing:
 1.9742
[torch.FloatTensor of size 1]
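
(As an aside, on a modern PyTorch (0.4 or later) Variable has been
merged into Tensor, so the same example can be written more directly.
Here is a minimal sketch, assuming your version is recent enough to
have torch.randint, which stands in for the multinomial trick:)

import torch

torch.manual_seed(2020)

nBatch, nClass, width, height = 2, 4, 3, 5

# input has shape (nBatch, nClass, width, height): unnormalized scores
input = torch.randn(nBatch, nClass, width, height)
# target has shape (nBatch, width, height): class indices in [0, nClass)
target = torch.randint(nClass, (nBatch, width, height), dtype=torch.long)

log_probs = torch.nn.functional.log_softmax(input, dim=1)
loss = torch.nn.NLLLoss()(log_probs, target)
print(loss)   # a 0-dimensional tensor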

Note that target has one fewer dimension than input. In particular,
target does not have an nClass dimension, while input does; each
element of target is just a class index in [0, nClass), which is why
target.min() and target.max() come out as 0 and 3 above.
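
To make that concrete: in the K-dimensional case, NLLLoss (with its
default settings, no class weights and averaging over all elements)
looks up, at each (batch, x, y) location, the log-probability that
input assigns to the class given by target, and returns the mean of
the negatives. A sketch of that computation by hand, reusing
log_probs and target from the modern snippet above:

# target.unsqueeze(1) has shape (nBatch, 1, width, height), so gather
# picks out the target class's log-probability at every location
picked = log_probs.gather(1, target.unsqueeze(1)).squeeze(1)
manual_loss = -picked.mean()
print(manual_loss)   # matches torch.nn.NLLLoss()(log_probs, target)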

Best.

K. Frank
