Multidimensional NLLLoss

In NLLLoss for multiple dimensions, I see that the log-probabilities tensor has to be arranged as (N, C, d1, d2, …)
and the target as (N, d1, d2, …). Why is this required?

Why can't NLLLoss accept (N, d1, d2, …, dk, C) and (N, d1, d2, …, dk) [or (N, d1, d2, …, dk, 1) after unsqueezing the last dimension] and figure out how to compute the loss itself, since only one dimension differs?
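To illustrate, here is a minimal sketch of what happens with a channels-last layout (the sizes N=2, C=5, d1=3, d2=3 are made up for the example): NLLLoss reads dim 1 as the class dimension, so it rejects the target shape.

```python
import torch
import torch.nn as nn

# Channels-last layout: N x d1 x d2 x C (hypothetical sizes)
log_probs = torch.randn(2, 3, 3, 5).log_softmax(dim=-1)
target = torch.randint(0, 5, (2, 3, 3))  # N x d1 x d2

try:
    # NLLLoss treats dim 1 as C, so the expected target shape
    # no longer matches and a RuntimeError is raised.
    nn.NLLLoss()(log_probs, target)
except RuntimeError as e:
    print("shape mismatch:", e)
```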

Hi, is there an existing feature request for this?

It’s standard for PyTorch tensors to be organized in a (N, C, <other_sizes>) fashion. For example, images are represented as (N, C, H, W).

You could permute the dimensions to use NLLLoss:

tensor                 # N x d1 x d2 x … x dk x C
tensor.movedim(-1, 1)  # N x C x d1 x d2 x … x dk (equivalent to permuting the last dim to position 1)