How can I get the loss of individual data points?

As far as I understand, the loss function can return either the mean loss over the batch or the sum of the losses in the batch.

Currently, to get the individual losses in evaluation mode, I use a batch size of one.

Is there any way to get the loss of each example in the batch while using a batch size larger than one?

Also, why does the loss value for the same input data differ between trials? In evaluation mode, aren't the parameters fixed?

You can write your own loss function instead of using the built-in one; then you can do everything you want inside it.
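For example, here is a minimal sketch of such a custom function (the helper name `per_sample_cross_entropy` is my own) that returns one cross-entropy value per sample instead of a single reduced scalar:

```python
import torch
import torch.nn.functional as F

def per_sample_cross_entropy(logits, targets):
    # Hypothetical helper: cross entropy per sample, with no reduction.
    # log_softmax is numerically safer than softmax followed by log.
    log_probs = F.log_softmax(logits, dim=1)
    # Pick out the log-probability of each sample's target class.
    return -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)

logits = torch.randn(5, 10)                 # batch of 5, 10 classes
targets = torch.randint(0, 10, (5,))
losses = per_sample_cross_entropy(logits, targets)   # shape: (5,)
```

Averaging `losses` gives the same value as the built-in mean-reduced cross entropy, so it can replace it with no change in training behaviour.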

Right now there’s no other way than using single-element batches or reimplementing the loss yourself, but we’re working on that. See this issue.

The issue is still open. What’s the optimal way of implementing a per-sample cross-entropy loss?
Doing it like this means calculating the loss twice:

import torch
import torch.nn.functional as F

def loss(y, targets):
    # log_softmax is more stable than softmax followed by log
    log_probs = F.log_softmax(y, dim=1)
    per_sample = [-log_probs[i][targets[i]] for i in range(y.size(0))]
    return F.cross_entropy(y, targets), per_sample
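In current PyTorch versions this no longer needs a custom loss: `F.cross_entropy` (and the `nn.CrossEntropyLoss` module) accepts `reduction='none'`, which returns one loss value per sample. A short sketch:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 3)            # batch of 4 samples, 3 classes
targets = torch.tensor([0, 2, 1, 2])

# reduction='none' skips the mean/sum and returns a loss per sample
per_sample = F.cross_entropy(logits, targets, reduction='none')

# The mean of the per-sample losses matches the default (mean) reduction
mean_loss = F.cross_entropy(logits, targets)
```

Here `per_sample` has shape `(4,)`, one entry per example, so the loss is computed only once for both the batch statistic and the individual values.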