I’m trying to solve a binary classification problem (`target=0` and `target=1`) with one exception: some of my labels are deliberately set to `target=0.5`, and I want those samples to incur zero loss whether they are classified as 0 or as 1 (i.e. both classes are “correct”).
I tried to implement a custom loss from scratch, based on PyTorch’s `BCEWithLogitsLoss`:

```python
import torch
import torch.nn.functional as F

class myLoss(torch.nn.Module):
    def __init__(self, pos_weight=1):
        super().__init__()
        self.pos_weight = pos_weight

    def forward(self, input, target):
        epsilon = 10 ** -44
        # epsilon is added to the logit inside logsigmoid, and to the
        # probability inside the second log term
        my_bce_loss = -1 * (self.pos_weight * target * F.logsigmoid(input + epsilon)
                            + (1 - target) * torch.log(1 - torch.sigmoid(input) + epsilon))
        # per-element weight: 1 for target == 0 or 1, 0 for target == 0.5
        add_loss = (target - 0.5) ** 2 * 4
        mean_loss = (my_bce_loss * add_loss).mean()
        return mean_loss
```
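To double-check the weighting term itself, it does map the three label values the way I intend:

```python
import torch

# the same weighting used in myLoss: maps target 0 -> 1, 0.5 -> 0, 1 -> 1
target = torch.tensor([0.0, 0.5, 1.0])
add_loss = (target - 0.5) ** 2 * 4
print(add_loss)  # tensor([1., 0., 1.])
```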
`epsilon` was chosen so that the log is bounded below by about -100, as suggested in the `BCELoss` documentation.
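A quick sanity check of that bound (plain Python, nothing PyTorch-specific):

```python
import math

epsilon = 10 ** -44
# log(epsilon) is about -101.3, so log(x + epsilon) >= -101.3 for any x >= 0
print(math.log(epsilon))
```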
However, I’m still getting NaN errors:

```
Function 'LogBackward' returned nan values in its 0th output.
```

or

```
Function 'SigmoidBackward' returned nan values in its 0th output.
```
Any suggestions on how I can fix my loss function? Perhaps by somehow inheriting from `BCEWithLogitsLoss` and modifying its `forward` function?
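To clarify what I mean by inheriting: something along these lines is what I had in mind (an untested sketch; the class name `MaskedBCEWithLogits` is just a placeholder):

```python
import torch
import torch.nn as nn

class MaskedBCEWithLogits(nn.BCEWithLogitsLoss):
    """Sketch: reuse the parent's numerically stable per-element loss,
    then zero out the contribution of target == 0.5 entries with a mask."""

    def __init__(self, pos_weight=None):
        # reduction='none' keeps per-element losses so they can be masked
        super().__init__(reduction='none', pos_weight=pos_weight)

    def forward(self, input, target):
        per_element = super().forward(input, target)
        # maps target 0 -> 1, 0.5 -> 0, 1 -> 1, same as add_loss above
        mask = (target - 0.5) ** 2 * 4
        return (per_element * mask).mean()
```

With `reduction='none'` the parent returns one loss value per element, so the mask can zero out the 0.5-labelled entries before averaging, while the parent handles the log-sigmoid numerics.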