Compute loss for each batch?

I have an input of size BxCxHxW and a label of size BxHxW, where B is the batch size. We often compute the loss like this:

criterion = nn.CrossEntropyLoss()
pred = model(input)
loss = criterion(pred, label)

If I want to compute the loss for each sample in the batch, then I will use:

criterion = nn.CrossEntropyLoss()
pred = model(input)
loss = 0
for i in range(B):
    loss += criterion(pred[i:i+1,...], label[i:i+1,...])

Does the second approach produce the same result as the first approach? Thanks

It won’t produce the same loss, as the default reduction='mean' in nn.CrossEntropyLoss averages over all elements (B*H*W in your case), while your loop accumulates the per-sample means.
If you set reduction='sum' in both approaches, you should get the same loss.
However, if you need a loss value for each sample, just disable the reduction via reduction='none' (related topic).
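Here is a minimal sketch illustrating the three reduction modes (the sizes and random tensors are made up for the example and stand in for your model output and target):

import torch
import torch.nn as nn

B, C, H, W = 4, 3, 8, 8                      # hypothetical sizes
pred = torch.randn(B, C, H, W)               # stands in for model(input)
label = torch.randint(0, C, (B, H, W))

# default reduction='mean': averages over all B*H*W elements
loss_mean = nn.CrossEntropyLoss(reduction='mean')(pred, label)

# reduction='sum': matches the per-sample loop when the loop also uses 'sum'
criterion_sum = nn.CrossEntropyLoss(reduction='sum')
loss_sum = criterion_sum(pred, label)
loop_sum = sum(criterion_sum(pred[i:i+1], label[i:i+1]) for i in range(B))
print(torch.allclose(loss_sum, loop_sum))    # True

# reduction='none': elementwise losses of shape BxHxW;
# average the spatial dims to get one loss value per sample
loss_none = nn.CrossEntropyLoss(reduction='none')(pred, label)
per_sample = loss_none.mean(dim=(1, 2))      # shape [B]
print(per_sample.shape)                      # torch.Size([4])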
