How to obtain element-wise loss in a batch?

Hello all, I have a requirement where I need the loss for each element in a batch of data, and my model returns the loss in training mode. What is the best way to obtain this?

I am doing it the following way:
loss0 = model(batch[0])
loss1 = model(batch[1])
loss2 = model(batch[2])
loss3 = model(batch[3])

loss = w0 * loss0 + w1 * loss1 + w2 * loss2 + w3 * loss3

loss.backward()

My code executes without errors. Am I missing anything here, or is this the right way to get a weighted loss with a different weight for each element of the batch? Please suggest.

The way you are implementing it works, but it may hurt performance.
You can pass reduction='none' to most loss functions to get what you want.
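As a minimal sketch of that suggestion: since the original post doesn't show the model or loss, this assumes a model that returns per-example logits and a standard criterion (here nn.CrossEntropyLoss); the toy model, labels, and weights are hypothetical. The key point is that reduction="none" gives one loss value per example from a single batched forward pass, which you can then weight and sum:

```python
import torch
import torch.nn as nn

# Toy stand-in for the poster's model: any module that returns
# per-example outputs (here: logits over 3 classes) works the same way.
model = nn.Linear(10, 3)
criterion = nn.CrossEntropyLoss(reduction="none")  # per-example losses, no averaging

batch = torch.randn(4, 10)            # one batch of 4 examples
targets = torch.tensor([0, 2, 1, 0])  # hypothetical labels
weights = torch.tensor([0.1, 0.2, 0.3, 0.4])  # per-example weights w0..w3

logits = model(batch)                          # single forward pass over the whole batch
per_example_loss = criterion(logits, targets)  # shape: (4,) — one loss per element
loss = (weights * per_example_loss).sum()      # weighted scalar loss
loss.backward()
```

Compared with four separate forward passes, this keeps the batch together, so the GPU processes all examples in parallel and batch-dependent layers see the full batch.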

Thanks @mMagmer for your reply. What aspects of performance will be impacted? Do you mean the GPU is not fully utilised the way I am implementing it?

Yes, it affects GPU utilization.
Also, some layers, like batch normalization, need a reasonably large batch size to perform well, i.e. for model accuracy.

Thanks very much @mMagmer for the clarification.