Why does the input to BCEWithLogitsLoss have negative values?

My loss function is:

import torch

def loss_fn(inp, target):
    # per-label counts of negatives and positives (assuming target has shape [batch, num_labels])
    zeros_sum = (target == 0).sum(dim=0).float()
    one_sum = (target == 1).sum(dim=0).float()

    # weight positives by the negative/positive ratio (the 1e-2 avoids division by zero)
    pos_weight = zeros_sum / (one_sum + 1e-2)
    criterion = torch.nn.BCEWithLogitsLoss(reduction='mean', pos_weight=pos_weight)

    return criterion(inp, target)
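
For example, assuming inp holds raw logits and target is a multi-label 0/1 tensor of shape (batch, num_labels), the function would be called like this (made-up shapes, just a sketch):

inp = torch.randn(8, 5)                       # raw logits, not probabilities
target = torch.randint(0, 2, (8, 5)).float()  # binary targets per label

loss = loss_fn(inp, target)                   # pos_weight is recomputed from this batch
print(loss)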

What I pass to the loss function is the raw output of my model, whose last layer is a convolution with no Sigmoid activation:

pred = model(inputs)

loss = loss_fn(pred, outputs)

So why does my pred have negative values? I thought the Sigmoid squashes everything between 0 and 1?

Could you check the range of your target tensor and make sure it’s in [0, 1]?
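
For example, something like this should reveal any out-of-range values (assuming target is your target tensor):

print(target.min().item(), target.max().item())   # both should lie in [0, 1]
print(target.unique())                            # ideally only 0. and 1.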

Also, could you explain the two code snippets, as the second one uses outputs as the target tensor?

I see the issue. I was doing:

print(pred)

I should be doing:

print(torch.nn.Sigmoid()(pred))

I have a question. According to the PyTorch documentation for BCEWithLogitsLoss, the sigmoid is applied internally. So why are you applying the sigmoid again with torch.nn.Sigmoid()(pred)?

Please help me.


I have the same question: why use torch.nn.Sigmoid()(pred) after using BCEWithLogitsLoss? I thought BCEWithLogitsLoss already has the Sigmoid built in?

nn.BCEWithLogitsLoss applies the sigmoid internally and calculates the loss given the logits as inputs. If you want the probabilities produced by the model, you have to apply the sigmoid manually.
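
For illustration, a minimal sketch with made-up shapes showing both steps:

import torch

logits = torch.randn(4, 3)                     # raw model outputs; can be negative
targets = torch.randint(0, 2, (4, 3)).float()  # binary targets

# the loss takes the logits directly; the sigmoid is applied internally
loss = torch.nn.BCEWithLogitsLoss()(logits, targets)

# to get probabilities (e.g. for predictions), apply the sigmoid yourself
probs = torch.sigmoid(logits)                  # values in (0, 1)
preds = (probs > 0.5).float()                  # hard 0/1 predictions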
