Defining loss function inside train loop

I am trying to implement balanced binary focal loss (paper: [1708.02002v2] Focal Loss for Dense Object Detection), where I need to give a different weight (the ratio of the negative to the positive class) for each input in the batch.

So if my batch size is 6, I would create a weight tensor of size 6 that is passed as below:
loss_fn = nn.BCEWithLogitsLoss(pos_weight=weight_tensor, reduction='none')
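
For concreteness, here is a minimal, self-contained sketch of that setup (the shapes, labels, and weight values below are made up for illustration; each sample has a single logit, so a weight vector of length 6 lines up with the batch elementwise):

import torch
import torch.nn as nn

batch_size = 6
logits = torch.randn(batch_size, requires_grad=True)   # stand-in for the model output
targets = torch.randint(0, 2, (batch_size,)).float()   # binary ground truth

# hypothetical per-sample (negative count / positive count) ratios
weight_tensor = torch.tensor([2.0, 0.5, 1.0, 3.0, 1.5, 0.8])

loss_fn = nn.BCEWithLogitsLoss(pos_weight=weight_tensor, reduction='none')
loss = loss_fn(logits, targets)   # one unreduced loss value per sample
loss.mean().backward()            # gradients flow into `logits` as usual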

Won't it affect backpropagation if I define such a loss_fn in each iteration in order to supply weight_tensor dynamically based on the inputs of each batch?

What I mean is: if my forward function looks like the one below, won't it be a problem for backpropagation (i.e., won't it affect PyTorch's computational graph)?

def forward(self, img, gt_img, count_pixels):
    # ------ Some code is removed for better context to this question ------

    # -------------- BCE loss -----------------
    # Alpha (class-balancing) component in focal loss:
    # per-sample ratio of negative to positive pixel counts
    ratio = []
    for counts in count_pixels:
        ratio.append(counts[0] / counts[1])
    ratio = np.array(ratio)
    ratio = torch.from_numpy(ratio).to(torch.float32)

    bce_criterion = nn.BCEWithLogitsLoss(pos_weight=ratio, reduction='none')

    # -------------- Focal loss ---------------
    # Yet to implement

    loss = bce_criterion(final_pred, gt_patches)

    return loss, patches, pred_pixel_values

@ptrblck please advise

Hi Muhammad!

You will not be able to backpropagate through the computation of ratio (for
three reasons) – however, I expect that you don’t want to. Assuming that
final_pred is the output of some properly differentiable model, you will be
able to backpropagate through bce_criterion and final_pred.
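
To illustrate the point, here is a toy check (not the model from the question; the linear layer, shapes, and ratio values are made up) showing that a pos_weight built from plain numbers carries no graph, while gradients still flow back through the criterion into the model:

import numpy as np
import torch
import torch.nn as nn

model = nn.Linear(4, 1)                       # stand-in for the real model
inp = torch.randn(6, 4)
gt = torch.randint(0, 2, (6, 1)).float()

ratio = torch.from_numpy(np.array([2.0, 0.5, 1.0, 3.0, 1.5, 0.8])).to(torch.float32)
print(ratio.requires_grad)                    # False: no graph through `ratio`

final_pred = model(inp)                       # shape [6, 1]
bce_criterion = nn.BCEWithLogitsLoss(pos_weight=ratio.view(-1, 1), reduction='none')
loss = bce_criterion(final_pred, gt).mean()
loss.backward()
print(model.weight.grad is not None)          # True: backprop reached the model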

As an aside, for stylistic reasons, I would probably use the functional version
of BCEWithLogitsLoss, binary_cross_entropy_with_logits(), rather than
repeatedly instantiating BCEWithLogitsLoss, but, either way, you’ll be doing
the same computation.
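
That is, something along these lines (using the names final_pred, gt_patches, and ratio from the forward() shown in the question):

import torch.nn.functional as F

# Same computation as instantiating nn.BCEWithLogitsLoss each iteration,
# but done in a single functional call.
loss = F.binary_cross_entropy_with_logits(
    final_pred, gt_patches,
    pos_weight=ratio,
    reduction='none',
)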

Best.

K. Frank
