To make sure I understood how BCE-with-logits loss works in PyTorch, I tried to calculate the loss manually. However, I cannot reconcile my manual calculation with the loss produced by the PyTorch function F.binary_cross_entropy_with_logits.

Can somebody please explain what I am doing wrong? My code is below:

import torch
import torch.nn.functional as F
import math

You need to use the natural logarithm (base e) in your
cross-entropy formula, but you are using the base-2 logarithm.
Remove the second argument in your calls to Python's math.log().
Thus:
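A corrected manual calculation might look like the following sketch (the logit and target values are illustrative, since the original numbers aren't shown):

```python
import math

import torch
import torch.nn.functional as F

# Illustrative single logit and target (assumed values).
logit = torch.tensor([0.8])
target = torch.tensor([1.0])

# Manual BCE: -[y*log(p) + (1-y)*log(1-p)], using the natural
# log -- math.log() with no second argument.
p = torch.sigmoid(logit).item()
y = target.item()
manual = -(y * math.log(p) + (1 - y) * math.log(1 - p))

builtin = F.binary_cross_entropy_with_logits(logit, target).item()
print(manual, builtin)  # the two values should now agree
```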

As a side note, calculating log() and sigmoid() separately
can amplify numerical errors. You can avoid this by using
something like PyTorch's logsigmoid(), which uses a
numerically stable algorithm internally.
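To see why this matters, here is a sketch with an extreme logit (the value -200 is just an illustrative assumption): sigmoid() underflows to 0 in float32, so taking log() afterwards gives -inf, while logsigmoid() and binary_cross_entropy_with_logits() stay finite.

```python
import torch
import torch.nn.functional as F

# A very negative logit: sigmoid(-200) underflows to 0.0 in
# float32, so log(sigmoid(x)) blows up.
logit = torch.tensor([-200.0])
target = torch.tensor([1.0])

naive = -torch.log(torch.sigmoid(logit))  # -log(0) -> inf
stable = -F.logsigmoid(logit)             # finite, approx. 200
builtin = F.binary_cross_entropy_with_logits(logit, target)

print(naive.item(), stable.item(), builtin.item())
```

The fused builtin matches the logsigmoid()-based computation, while the separate log(sigmoid()) path overflows.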