I want to get a tensor result from NLLLoss2d without reduction or averaging.
For example: input is NxCxHxW and target is NxHxW, and I want to get back a tensor of shape NxHxW or Nx1xHxW,
because I want to compute the loss with self-defined functions.

I noticed the implementation of nllloss2d_reference in common_nn.py:

def nllloss2d_reference(input, target, weight=None, ignore_index=-100,
                        size_average=True, reduce=True):
    N, C, H, W = input.size()
    output = torch.zeros(N, H, W).type_as(input)
    if isinstance(target, Variable):
        target = target.data
    if weight is None:
        weight = torch.ones(C).type_as(input)
    total_weight_data = 0
    for n in range(0, N):
        for h in range(0, H):
            for w in range(0, W):
                t_nhw = target[n][h][w]
                norm = 0. if ignore_index == t_nhw else weight[t_nhw]
                output[n][h][w] = -input[n][t_nhw][h][w] * norm
                total_weight_data += norm
    if reduce and size_average:
        return output.sum() / total_weight_data
    elif reduce:
        return output.sum()
    return output

But I want to know: if I do it like this, will the autograd framework still work?

You can use reduce=False (see the NLLLoss docs here) to get exactly this behavior, if you build PyTorch from master. It's much faster to use this than to define your own custom function in Python.
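In current PyTorch releases the reduce/size_average flags have been merged into a single reduction argument, and nn.NLLLoss accepts the 4-D input directly (replacing the deprecated NLLLoss2d), so the unreduced per-pixel loss looks roughly like this sketch:

```python
import torch
import torch.nn as nn

# Per-pixel NLL loss without reduction; reduction='none' replaced
# the older reduce=False / size_average flags in later releases.
N, C, H, W = 2, 3, 4, 5
logp = torch.randn(N, C, H, W).log_softmax(dim=1)  # NxCxHxW log-probabilities
target = torch.randint(0, C, (N, H, W))            # NxHxW class indices

criterion = nn.NLLLoss(reduction='none')           # nn.NLLLoss handles 4-D input
loss = criterion(logp, target)                     # shape NxHxW, one value per pixel
```

The resulting NxHxW tensor can then be fed into whatever self-defined weighting or reduction you like before calling backward().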

Autograd will work on the function you provided, as long as input is a Variable and the operations done in computing the function use it as such. You should pass in a Variable input and see if the code works.
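As a quick check that gradients really do flow through a self-defined, unreduced loss, here is a hedged, vectorized sketch of the reference loop above (nll2d_no_reduce is my own name, not a library function; it uses gather instead of Python loops, which is also much faster, and in current PyTorch Variables and Tensors are merged, so a plain tensor with requires_grad=True suffices):

```python
import torch

def nll2d_no_reduce(input, target, weight=None, ignore_index=-100):
    # Vectorized version of the reference loop: returns the NxHxW
    # per-pixel loss instead of a reduced scalar.
    N, C, H, W = input.shape
    if weight is None:
        weight = torch.ones(C).type_as(input)
    safe_target = target.clamp(min=0)               # avoid indexing with ignore_index
    mask = (target != ignore_index).type_as(input)  # 0 where the pixel is ignored
    norm = weight[safe_target] * mask               # NxHxW per-pixel weight
    # Pick the log-probability of the target class at each pixel.
    picked = input.gather(1, safe_target.unsqueeze(1)).squeeze(1)
    return -picked * norm

# Autograd flows through it: x is a leaf with requires_grad=True.
x = torch.randn(2, 3, 4, 5, requires_grad=True)
logp = x.log_softmax(dim=1)
target = torch.randint(0, 3, (2, 4, 5))
loss = nll2d_no_reduce(logp, target)  # NxHxW tensor of per-pixel losses
loss.sum().backward()                 # gradients reach x
```

The per-element indexing in the original reference loop also stays inside autograd, but the gather form avoids the triple Python loop entirely.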