BCE loss output is not corresponding to calculation by hand

Hi there,

I am using a DenseNet for a two class classification problem. The last layer of my network is a linear layer, so I am using the torch.nn.BCEWithLogitsLoss() loss function.
I am debugging my network since the training loss is not decreasing. I calculated the BCE both by hand and with the pytorch function, but the two outputs differ.

target = tensor([1.])
linear_prediction = tensor([-0.5270])
sigmoid_prediction = torch.sigmoid(linear_prediction)
print(sigmoid_prediction) → tensor([0.3712])
loss = torch.nn.BCELoss()(sigmoid_prediction, target)
print(loss) → tensor([0.9910])

However, when I calculate the loss using the formula loss = -y * log(x), with y = 1 and x = 0.3712, I get a loss of 0.4304.

Does anyone have suggestions why the two outputs could be different?

Thanks in advance!

Hi Iv!

Your notation here (“log(x)”) is ambiguous, but you are using the
so-called common logarithm (log base 10) in your calculation.
You should be using the natural logarithm (log base e), instead.
The natural logarithm is the correct choice (and is what pytorch is using).

Note that pytorch’s torch.log() and python’s math.log() give you the
natural logarithm. (Both provide log10() for the common logarithm.)
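To make the discrepancy concrete, here is a minimal check of both logarithm bases, using the sigmoid output 0.3712 and target 1 from your post (plain python, no torch needed):

```python
import math

x = 0.3712  # sigmoid_prediction from the post
y = 1.0     # target

# The BCE term for y = 1 is -y * log(x); the base of the log matters:
bce_natural = -y * math.log(x)    # natural log (base e), what pytorch uses
bce_common = -y * math.log10(x)   # common log (base 10)

print(bce_natural)  # ≈ 0.9910, matches torch.nn.BCELoss
print(bce_common)   # ≈ 0.4304, your hand-calculated value
```

So the pytorch result and the hand calculation agree once the same logarithm is used.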


K. Frank

Of course, I mixed those two up. Thanks for the reply!