# BCELoss from scratch

How can I define BCELoss for intermediate layers of an autoencoder? I can't use the existing function, since it expects one of the inputs to be labels, not variables. Any ideas?

Just write it out with autograd ops, i.e. `torch.*` operations:

```python
def example_loss(x, t):
    loss = ((x - t) ** 2).sum()
    return loss
```

Thanks, but that's MSE loss, which is easy to implement. I was asking about cross entropy.

Would

```python
CE_Loss = -(y * torch.log(a) + (1 - y) * torch.log(1 - a)).mean()
```

give the same thing?

Hi @Bixqu,

in principle yes. It is generally advisable to clamp the argument of the logarithm away from zero with a small epsilon, since `log(0)` returns `-inf` for predictions of exactly 0 or 1.
If you try:

```python
import torch
from torch.autograd import Variable

lossfunc = torch.nn.BCELoss()
for eps in [1e-11, 1e-12, 1e-13]:
    pred = Variable(torch.FloatTensor([1.0, 0.0]))
    label = Variable(torch.FloatTensor([0.0, 1.0]))
    # Clamp inside the log to avoid log(0) = -inf.
    ownloss = -(label * pred.clamp(min=eps).log()
                + (1 - label) * (1 - pred).clamp(min=eps).log()).mean()
    print("Reg: {} BCELoss: {:.4f} own BCELoss {:.4f}".format(
        eps, lossfunc(pred, label).data[0], ownloss.data[0]))
```

gives

```
Reg: 1e-11 BCELoss: 27.6310 own BCELoss 25.3284
Reg: 1e-12 BCELoss: 27.6310 own BCELoss 27.6310
Reg: 1e-13 BCELoss: 27.6310 own BCELoss 29.9336
```

(If you work with multilabel targets, you might sum over the labels and take the mean only over the batch, or some such.)
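To make that multilabel variant concrete, here is a small sketch (the helper name `bce_multilabel` and the eps value are my own choices, not from the thread): it clamps the log arguments, sums over the label dimension, and averages only over the batch.

```python
import torch

def bce_multilabel(pred, target, eps=1e-12):
    # Clamp the arguments of log to avoid -inf for predictions of exactly 0 or 1.
    per_element = -(target * pred.clamp(min=eps).log()
                    + (1 - target) * (1 - pred).clamp(min=eps).log())
    # Sum over the label dimension, then average over the batch.
    return per_element.sum(dim=1).mean()

pred = torch.tensor([[0.9, 0.1], [0.2, 0.8]])
target = torch.tensor([[1.0, 0.0], [0.0, 1.0]])
print(bce_multilabel(pred, target))
```

This matches `F.binary_cross_entropy(pred, target, reduction='sum')` divided by the batch size.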

Disclaimer: I don’t have any actual knowledge of the implementation of BCELoss.

Best regards

Thomas


Thanks for the reply. I want to apply this as the loss function of an autoencoder so I think it should work.
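As an illustration of that use, a manual BCE like the one above can be applied both to the reconstruction and to an intermediate activation. The tiny autoencoder below is purely hypothetical (the layer sizes, the 0.1 target activity for the hidden units, and all names are my own illustrative assumptions, not from the thread):

```python
import torch
import torch.nn as nn

def bce(pred, target, eps=1e-12):
    # Manual binary cross entropy; works on arbitrary tensors, not just labels.
    return -(target * pred.clamp(min=eps).log()
             + (1 - target) * (1 - pred).clamp(min=eps).log()).mean()

class TinyAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(8, 4), nn.Sigmoid())
        self.dec = nn.Sequential(nn.Linear(4, 8), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)           # intermediate activation in (0, 1)
        return self.dec(h), h

model = TinyAE()
x = torch.rand(16, 8)
out, h = model(x)
# Reconstruction BCE plus a BCE-style penalty on the intermediate layer.
loss = bce(out, x) + bce(h, torch.full_like(h, 0.1))
loss.backward()
```

Because everything is built from autograd ops, gradients flow through both terms, including the penalty on the intermediate layer.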

Hello everybody,
do you know where I can find the exact implementation of PyTorch's nn.BCELoss()?
I'm trying to implement a binary classifier with a slightly modified loss.
Thanks a lot and best regards,
filippo portera

This one here?

Thanks but does it refer to F.binary_cross_entropy()?
I couldn’t find the detailed implementation.
The lowest level I reached is:

`torch._C._nn.binary_cross_entropy()`

The `forward` of `nn.BCELoss` calls `F.binary_cross_entropy()`, which in turn dispatches to `torch._C._nn.binary_cross_entropy()` (the lowest level you've reached).

You can find the CPU implementation of the forward method of `binary_cross_entropy` here (and the backward right below it).
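Since that C++ kernel is not convenient to modify, a common alternative for a "slightly modified loss" is to re-implement BCE in Python as an `nn.Module` and tweak the formula there. A minimal sketch (the class name and the eps clamp are my own assumptions); as written it reproduces the standard mean-reduced BCE, so any modification starts from a verified baseline:

```python
import torch
import torch.nn as nn

class ModifiedBCELoss(nn.Module):
    """BCE re-implemented in Python so the formula can be tweaked freely."""
    def __init__(self, eps=1e-12):
        super().__init__()
        self.eps = eps

    def forward(self, pred, target):
        # Standard binary cross entropy with clamped logs; edit here as needed.
        return -(target * pred.clamp(min=self.eps).log()
                 + (1 - target) * (1 - pred).clamp(min=self.eps).log()).mean()

pred = torch.tensor([0.9, 0.2, 0.7])
target = torch.tensor([1.0, 0.0, 1.0])
print(ModifiedBCELoss()(pred, target))  # matches F.binary_cross_entropy here
```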