Yakirs
August 16, 2018, 11:49am
1
I’m working on a semantic segmentation project.
I can get the loss using this (simplified):
loss = nn.BCELoss()(predicted_weights, gt_weights)
But I want the loss per pixel: I want to know how much each pixel contributed to the total loss, and reweight the pixels before doing the backward pass.
Thanks
vfdev-5
(vfdev-5)
August 16, 2018, 11:59am
2
@Yakirs Maybe you can set reduction='none'. For example,
import torch
import torch.nn as nn
y = torch.rand(4, 10, 10)
y_true = torch.randint(0, 2, size=(4, 10, 10)).float()  # binary targets; BCELoss expects float
loss = nn.BCELoss(reduction='none')
loss(y, y_true).shape
> torch.Size([4, 10, 10])
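Building on that, the per-pixel loss map can be multiplied by a weight map and reduced to a scalar before calling backward(). A minimal sketch (the weight map here is purely an illustrative assumption; in practice it would come from your own weighting scheme, e.g. class frequencies or boundary masks):

```python
import torch
import torch.nn as nn

# reduction='none' keeps one loss value per element instead of averaging.
criterion = nn.BCELoss(reduction='none')

pred = torch.rand(4, 10, 10, requires_grad=True)   # predicted probabilities in [0, 1]
target = torch.randint(0, 2, (4, 10, 10)).float()  # binary ground truth, float for BCELoss

per_pixel = criterion(pred, target)                # shape (4, 10, 10), one loss per pixel

# Hypothetical per-pixel weight map: upweight the first row of each image.
weights = torch.ones_like(per_pixel)
weights[:, 0, :] = 2.0

# Reduce the weighted loss map to a scalar, then backpropagate as usual.
loss = (per_pixel * weights).mean()
loss.backward()
```

Because the reduction to a scalar happens after the weighting, autograd propagates each pixel's gradient scaled by its weight.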
Hi, but for a multi-dimensional output, how would you plan on doing backpropagation?