How is BCEWithLogitsLoss computed?

Hi All,
I’m a newbie in PyTorch and data science. I was looking at the documentation of nn.BCEWithLogitsLoss in PyTorch and I wanted to replicate the formula, but I couldn’t get the same result.

I have a multilabel classification problem with 31 classes.
Example of two records:

y_scores = torch.tensor([[-0.0089, 0.0000, -0.0922, -0.9179, -0.0000, -0.0506, 0.6511, 0.7989,
0.1416, -0.0000, 0.4984, 0.2442, -0.8102, -0.5201, -0.1818, -0.0000, -0.1521, -0.5098, -0.0000, 0.0000, -0.0000, -0.2269, 0.1799, 0.9420, 0.4986, 0.3374, -0.8464, -0.0976, -0.0000, -0.0000, -0.6304],
[-0.0089, 0.1645, -0.0922, -0.9179, -0.2571, -0.0000, 0.6511, 0.7989, 0.0000, -0.0000, 0.4984, 0.0000, -0.8102, -0.5201, -0.1818, -0.1467,-0.1521, -0.5098, -0.0112, 0.7157, -0.2090, -0.2269, 0.1799, 0.0000, 0.4986, 0.3374, -0.0000, -0.0000, -0.4324, -0.8032, -0.6304]])

y_true = torch.tensor([[0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0.]])

So I first calculated the different parts of the BCEWithLogitsLoss formula:

import torch
import torch.nn as nn

y_pred = torch.sigmoid(y_scores)
y_pred_log = torch.log(y_pred)
y_pred_log_1 = torch.log(1 - y_pred)

My_LOSS = -((y_true * y_pred_log) + (1 - y_true) * y_pred_log_1)
loss_fn_none = nn.BCEWithLogitsLoss(reduction="none")
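To make the comparison concrete, here is a minimal self-contained version of the check I’m running (with a small made-up batch of logits standing in for the full 31-class tensors above):

```python
import torch
import torch.nn as nn

# Small made-up example (stand-in for the 31-class tensors above)
y_scores = torch.tensor([[-0.0089, 0.6511, 0.7989, -0.8102],
                         [ 0.4984, -0.5201, 0.7157, -0.6304]])
y_true = torch.tensor([[0., 1., 0., 0.],
                       [0., 0., 1., 0.]])

# Manual computation from the formula
y_pred = torch.sigmoid(y_scores)
my_loss = -(y_true * torch.log(y_pred) + (1 - y_true) * torch.log(1 - y_pred))

# Built-in version, applied to the raw logits
loss_fn_none = nn.BCEWithLogitsLoss(reduction="none")
builtin_loss = loss_fn_none(y_scores, y_true)

print(my_loss)
print(builtin_loss)
print((my_loss - builtin_loss).abs().max())  # elementwise difference
```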

However, the result is not the same. What am I doing wrong?

Thanks all for your help
Cheers,

The maximal difference is ~1.7e-7, which is due to limited floating point precision using FP32.
If you need more precision, you could use DoubleTensors, but your manual implementation should work fine.
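A quick sketch of that precision argument, using randomly generated logits and targets rather than the exact tensors above: the float32 gap between the manual formula and the built-in loss is on the order of 1e-7, and it shrinks by many orders of magnitude once the same computation runs in float64.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
scores = torch.randn(2, 31)                     # random logits, stand-in for y_scores
targets = torch.randint(0, 2, (2, 31)).float()  # random multilabel targets

def max_diff(s, t):
    """Largest elementwise gap between the manual and the built-in loss."""
    manual = -(t * torch.log(torch.sigmoid(s))
               + (1 - t) * torch.log(1 - torch.sigmoid(s)))
    builtin = nn.BCEWithLogitsLoss(reduction="none")(s, t)
    return (manual - builtin).abs().max().item()

print(max_diff(scores, targets))                    # on the order of 1e-7 in float32
print(max_diff(scores.double(), targets.double()))  # far smaller in float64
```

The built-in loss is also the safer choice for extreme logits, since it works on the raw scores directly instead of chaining sigmoid and log, which can underflow to log(0).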

Okay, thanks a lot for your response :). Have a good day!