Hi, I am trying to apply boosting with a neural network, and to do so I need to multiply the loss of each sample by a scalar weight.
I’m implementing it by subclassing BCELoss in the following way:
```python
from torch.nn.modules.loss import BCELoss
import torch
from torch.nn import functional as F


class WeightedBCELoss(BCELoss):
    def __init__(self, weight=None, size_average=None, reduce=None,
                 reduction='mean', weights_boosting=None):
        super(BCELoss, self).__init__(weight, size_average, reduce, reduction)
        self.weights_boosting = weights_boosting

    def forward(self, input, target):
        total_cross_entropy = 0
        for i, sample in enumerate(input):
            y_hat = input[i]
            y = target[i]
            c_e = F.binary_cross_entropy(y_hat, y, weight=self.weight,
                                         reduction=self.reduction).item()
            c_e = c_e * self.weights_boosting[i].item()
            total_cross_entropy = total_cross_entropy + c_e
        return torch.tensor([total_cross_entropy], requires_grad=True)
```
But afterwards my model weights do not change at all. Do you have any idea how to fix this?
N.B. If I remove the `requires_grad=True` on the last line, it raises:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
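The same error can be reproduced outside my training loop with any tensor built from plain Python numbers, so I assume the problem is that my returned loss has no grad history:

```python
import torch

# A tensor built from a plain float has no grad_fn / autograd history,
# so calling backward() on it raises the same RuntimeError
loss = torch.tensor([1.0])
try:
    loss.backward()
except RuntimeError as e:
    print(e)
```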
Thanks in advance