nn.functional.cross_entropy error when comparing probability distributions

Hi,
I tried the 2nd example from the torch.nn.functional.cross_entropy page in the PyTorch 1.11.0 documentation:

import torch
import torch.nn.functional as F

# Example of target with class probabilities
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5).softmax(dim=1)
loss = F.cross_entropy(input, target)
# loss.backward()

I get the following error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Input In [8], in <cell line: 1>()
----> 1 loss = F.cross_entropy(input, target)
      2 loss

File ~/.virtualenvs/fwmingpt/lib/python3.8/site-packages/torch/nn/functional.py:2690, in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   2688 if size_average is not None or reduce is not None:
   2689     reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2690 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)

File ~/.virtualenvs/fwmingpt/lib/python3.8/site-packages/torch/nn/functional.py:2385, in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   2381     raise ValueError(
   2382         "Expected input batch_size ({}) to match target batch_size ({}).".format(input.size(0), target.size(0))
   2383     )
   2384 if dim == 2:
-> 2385     ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
   2386 elif dim == 4:
   2387     ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

RuntimeError: 1D target tensor expected, multi-target not supported

The dimensions of input and target are the same. I am new to PyTorch. Is the example wrong, or is there a bug in the code? How can I compute the cross entropy between two sets of probability distributions using torch?

Thank you for any help in advance.
Eleven

Hello! Are you sure you're running the exact snippet you posted? It runs fine for me (see code and output below). The only other thing I can think of is that something changed between torch 1.10 and 1.11.

import torch
from torch.nn import functional as F
print(torch.__version__)
Output:
1.10.0+cu111
input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5).softmax(dim=1)
loss = F.cross_entropy(input, target)
print(loss)
Output:
tensor(2.3997, grad_fn=<DivBackward1>)
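
In case it helps anyone who can't upgrade: as far as I know, probability-distribution (soft-label) targets for cross_entropy only landed around torch 1.10, and older releases only accept class-index targets. On an older build you can compute the same quantity by hand with log_softmax. A minimal sketch of the equivalent (variable names here are just illustrative):

import torch
from torch.nn import functional as F

input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5).softmax(dim=1)

# Soft-label cross entropy: -sum(target * log p) per row, averaged over the batch.
# With reduction='mean' this should match F.cross_entropy(input, target) on >= 1.10.
log_probs = F.log_softmax(input, dim=1)
loss = -(target * log_probs).sum(dim=1).mean()
print(loss)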

Yes. Thank you, Andrei. It was a version issue; I was running an older version of torch:

print(torch.__version__)
1.8.0+cu111

I upgraded PyTorch to 1.11.0+cu113 and the code as posted now runs fine.

Best.
Eleven