I am using a DenseNet for a two-class classification problem. The last layer of my network is a linear layer, so I am using the torch.nn.BCEWithLogitsLoss() loss function.
I am debugging my network because the training loss is not decreasing. As a sanity check, I calculated the BCE by hand and also with the PyTorch function, but the two outputs differ.
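For reference, here is a minimal sketch of how I apply the loss (using the same example numbers as below): BCEWithLogitsLoss applies the sigmoid internally, so it is fed the raw linear output (the logit) directly.

```python
import torch

# raw output of the final linear layer (the logit) and the target label
logit = torch.tensor([-0.5270])
target = torch.tensor([1.])

# BCEWithLogitsLoss = sigmoid + BCELoss in one numerically stable step
loss_fn = torch.nn.BCEWithLogitsLoss()
loss = loss_fn(logit, target)
print(f"{loss.item():.4f}")  # 0.9910
```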
target = torch.tensor([1.])
linear_prediction = torch.tensor([-0.5270], requires_grad=True)
sigmoid_prediction = torch.sigmoid(linear_prediction)
print(sigmoid_prediction) → tensor([0.3712], grad_fn=&lt;SigmoidBackward&gt;)
loss = torch.nn.BCELoss()(sigmoid_prediction, target)
print(loss) → tensor([0.9910], grad_fn=&lt;BinaryCrossEntropyBackward&gt;)
However, when I calculate the loss by hand using the formula loss = -(y·log(x) + (1 - y)·log(1 - x)), which for y = 1 reduces to -log(x), I get a loss of 0.4304 with x = 0.3712.
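To make the hand calculation reproducible, here is a small sketch of it in code (using the same x and y as above). Note that Python's math.log is the natural logarithm, while math.log10 is base 10, so it is worth double-checking which base is used:

```python
import math

x = 0.3712  # sigmoid output
y = 1.0     # target label

# binary cross-entropy: -(y*log(x) + (1-y)*log(1-x)); for y = 1 this is -log(x)
loss_natural = -(y * math.log(x) + (1 - y) * math.log(1 - x))    # natural log
loss_base10 = -(y * math.log10(x) + (1 - y) * math.log10(1 - x))  # base-10 log

print(f"{loss_natural:.4f}")  # 0.9910
print(f"{loss_base10:.4f}")   # 0.4304
```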
Does anyone have suggestions why the two outputs could be different?
Thanks in advance!