I am trying to implement balanced binary focal loss (paper: [1708.02002v2] Focal Loss for Dense Object Detection), where I need to give a different weight (the ratio of the negative class to the positive class) to each sample, based on the input tensor.

So if I have a batch size of 6, I would create a weight tensor of size 6 that is passed as below:

`loss_fn = nn.BCEWithLogitsLoss(pos_weight=weight_tensor, reduction='none')`

Won't it affect backpropagation if I define such a `loss_fn` in each iteration, in order to pass `weight_tensor` dynamically based on the inputs of each batch?

What I mean is: if my forward function looks like the one below, won't it be a problem for backpropagation (i.e., won't it affect PyTorch's computational graph)?

```
def forward(self, img, gt_img, count_pixels):
    # ------ Some code is removed for better context to this question ------
    # -------------- BCE loss --------------
    # Alpha component in focal loss: per-sample negative/positive ratio
    ratio = []
    for counts in count_pixels:
        ratio.append(counts[0] / counts[1])
    ratio = np.array(ratio)
    ratio = torch.from_numpy(ratio).to(torch.float32)
    bce_criterion = nn.BCEWithLogitsLoss(pos_weight=ratio, reduction='none')
    # -------------- Focal loss --------------
    # Yet to implement
    loss = bce_criterion(final_pred, gt_patches)
    return loss, patches, pred_pixel_values
```
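For what it's worth, here is a minimal sketch of the same idea using the functional form `F.binary_cross_entropy_with_logits`, which avoids re-instantiating the module every step; `pos_weight` is just a constant tensor (no gradient is tracked through it), so rebuilding it per batch does not disturb the graph. The shapes, counts, and variable names here are assumptions for illustration, not the actual model:

```python
import torch
import torch.nn.functional as F

batch_size, n = 6, 10
final_pred = torch.randn(batch_size, n, requires_grad=True)   # logits (assumed shape)
gt_patches = torch.randint(0, 2, (batch_size, n)).float()     # binary targets

# count_pixels[i] = (negative_count, positive_count) for sample i (assumed)
count_pixels = torch.tensor([[90.0, 10.0]] * batch_size)
ratio = count_pixels[:, 0] / count_pixels[:, 1]               # neg/pos, vectorized

# pos_weight must broadcast against the target, so give it shape (B, 1)
loss = F.binary_cross_entropy_with_logits(
    final_pred, gt_patches,
    pos_weight=ratio.unsqueeze(1),   # per-sample weight, broadcast over pixels
    reduction='none')

loss.mean().backward()
print(final_pred.grad is not None)  # gradients flow through the logits normally
```

Note that `pos_weight` of shape `(batch_size,)` would be broadcast over the *last* dimension of the target, so reshaping it to `(batch_size, 1)` is what makes it act per sample rather than per class.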

@ptrblck please advise