Get loss as a map of loss per pixel

I’m working on a semantic segmentation project.

I can get the loss using this (simplified):

loss = nn.BCELoss()(predicted_weights, gt_weights)

But I want to get the loss per pixel. I want to know how much each pixel contributed to the total loss, and re-weight the pixels before running backpropagation.

Thanks

@Yakirs Maybe you can set reduction='none'. For example,

import torch
import torch.nn as nn

y = torch.rand(4, 10, 10)
y_true = torch.randint(0, 2, size=(4, 10, 10)).float()  # BCELoss needs float targets; randint's high bound is exclusive
loss = nn.BCELoss(reduction='none')
loss(y, y_true).shape

> torch.Size([4, 10, 10])
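To re-weight pixels before backpropagation, as the question asks, you can multiply the per-pixel loss map by a weight map and reduce it to a scalar yourself. A minimal sketch (the foreground/background weighting here is just an arbitrary example, not part of the original post):

```python
import torch
import torch.nn as nn

# Predictions must be probabilities in [0, 1] for BCELoss
y = torch.rand(4, 10, 10, requires_grad=True)
# BCELoss expects float targets
y_true = torch.randint(0, 2, size=(4, 10, 10)).float()

criterion = nn.BCELoss(reduction='none')
loss_map = criterion(y, y_true)      # shape (4, 10, 10): one loss value per pixel

# Example per-pixel weights: emphasize foreground pixels twice as much
weights = torch.where(y_true == 1, 2.0, 1.0)

loss = (loss_map * weights).mean()   # reduce the weighted map to a scalar
loss.backward()                      # gradients flow through the weighting
```

With `reduction='none'` the weighting is entirely under your control; `nn.BCELoss` also has a `weight` argument, but that rescales per-element losses with a fixed tensor rather than one you compute on the fly.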

Hi, but for a multi-dimensional output, how would you do backpropagation?
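`backward()` requires a scalar unless you pass an explicit upstream gradient, so with `reduction='none'` you typically reduce the loss map yourself (sum or mean) before calling it. A minimal sketch, reusing the shapes from the example above:

```python
import torch
import torch.nn as nn

y = torch.rand(4, 10, 10, requires_grad=True)
y_true = torch.randint(0, 2, size=(4, 10, 10)).float()

loss_map = nn.BCELoss(reduction='none')(y, y_true)

# Option 1: reduce the per-pixel map to a scalar, then backpropagate
loss_map.mean().backward()

# Option 2: backpropagate the full map by supplying an upstream gradient
# (an all-ones gradient is equivalent to summing first):
# loss_map.backward(torch.ones_like(loss_map))
```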