NLLLoss mismatch with PyTorch implementation

Hi all,
So I am trying to implement the NLLLoss function and am completely lost in the dimensions. Here is my attempt. Could someone please tell me what I am missing?

import torch
import torch.nn as nn
import torch.nn.functional as F


log_softmax = nn.LogSoftmax(dim=1)

torch.random.manual_seed(100)
logits = torch.randn([4, 2, 3, 1])  # [N, C, d1, d2]: batch of 4, 2 classes

torch.random.manual_seed(200)
target = torch.randint_like(logits[:, 1, :], 0, 2)  # [N, d1, d2], values in {0., 1.} (float; cast to long below)

logp = log_softmax(logits)
logp1 = logits - torch.log(logits.exp().sum(1).unsqueeze(1))  # manual log-softmax over dim 1
print('Pytorch NllLoss:', F.nll_loss(logp, target.long()), F.nll_loss(logp1, target.long()))

loss1 = target * logp[:,0,:,:]
loss2 = (1.0 - target) * (1.0 - logp[:,1,:,:])

loss = loss1 + loss2

print('My Loss',torch.mean(loss))

Output:
Pytorch NllLoss: tensor(0.7951) tensor(0.7951)
My Loss tensor(0.7221)

Thanks!


I think the NLLLoss should be (your loss1/loss2 formula looks like a binary cross-entropy variant; NLLLoss instead just picks the log-probability at the target index and negates it):

loss = 0.
for i, target_i in enumerate(target.long()):    # loop over the batch dim (N = 4)
    for j, target_j in enumerate(target_i):     # loop over dim 2 (d1 = 3); target_j has shape [1]
        # .item() works here because the trailing dim has size 1;
        # select the log-probability of the target class along dim 1
        loss = loss - logp[i][target_j.item()][j]
print('My Loss', loss / target.numel())         # average over 4*3*1 = 12 elements

High-dimensional NLLLoss calculation is abstract, but it's always the same idea: select the value along dim 1 (counting from dim 0) according to the target.

Your log-softmax is implemented correctly, and I can't figure out a better way to implement NLLLoss for arbitrary dimensions right now.
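
Actually, one vectorized possibility for arbitrary trailing dimensions is torch.gather. A minimal sketch (the helper name my_nll_loss is just for illustration), assuming the same [N, C, d1, ..., dk] layout as above:

import torch
import torch.nn as nn
import torch.nn.functional as F

def my_nll_loss(logp, target):
    # logp: [N, C, d1, ..., dk] log-probabilities, target: [N, d1, ..., dk] class indices.
    # Unsqueeze a class dim so target can index logp along dim 1, gather the
    # log-probability of each target class, then negate and average.
    picked = logp.gather(1, target.unsqueeze(1)).squeeze(1)  # [N, d1, ..., dk]
    return -picked.mean()

torch.random.manual_seed(100)
logits = torch.randn([4, 2, 3, 1])
target = torch.randint(0, 2, (4, 3, 1))

logp = nn.LogSoftmax(dim=1)(logits)
print(F.nll_loss(logp, target))   # reference
print(my_nll_loss(logp, target))  # should print the same value

Since gather keeps everything on-tensor, this also avoids the Python loops and works the same on CUDA and with autograd.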