[solved] How can nn.BCELoss handle both tanh and sigmoid activation functions?

I’m a beginner with GANs and I have some confusion about the nn.BCELoss function.
As far as I know, binary cross entropy is -(y*log(a) + (1-y)*log(1-a)).
It is meant for a sigmoid activation function, which maps the output into the range 0 to 1.
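To make the formula concrete, here is a tiny scalar sketch of it in plain Python (my own illustration, not library code; `sigmoid` and `bce` are hypothetical helper names):

```python
import math

def sigmoid(z):
    # maps any real z into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def bce(y, a):
    # binary cross entropy for a target y in {0, 1}
    # and a sigmoid output a in (0, 1)
    return -(y * math.log(a) + (1 - y) * math.log(1 - a))

a = sigmoid(0.7)
print(bce(1.0, a))  # small loss: prediction agrees with target 1
print(bce(0.0, a))  # larger loss: prediction disagrees with target 0
```

Note that at a = 0.5 the loss is log(2) for either target, which is the "maximally uncertain" point.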

Here are my questions:

  1. From my search, BCE for a tanh activation is
    -0.5 * ( (1-y)*log(1-a) + (1+y)*log(1+a) ) + log(2).
    Is that right?

  2. In many DCGAN implementations, the discriminator ends in a sigmoid and the generator ends in a tanh, yet both are trained with nn.BCELoss.
    How can this one nn.BCELoss() function work for both of them without any additional parameters?
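Regarding question 1: if that formula is right, it should agree numerically with the standard sigmoid BCE under the substitutions a' = 2a - 1 and y' = 2y - 1 (rescaling (0, 1) to (-1, 1)). A quick check of that claim in plain Python (my own sketch, not from any reference):

```python
import math

def bce_sigmoid(y, a):
    # standard BCE: target y in {0, 1}, output a in (0, 1)
    return -(y * math.log(a) + (1 - y) * math.log(1 - a))

def bce_tanh(yt, at):
    # the tanh-range form from question 1: yt in {-1, 1}, at in (-1, 1)
    return -0.5 * ((1 - yt) * math.log(1 - at)
                   + (1 + yt) * math.log(1 + at)) + math.log(2)

# the two forms match exactly under at = 2a - 1, yt = 2y - 1
for y in (0.0, 1.0):
    for a in (0.1, 0.5, 0.9):
        at, yt = 2 * a - 1, 2 * y - 1
        assert abs(bce_sigmoid(y, a) - bce_tanh(yt, at)) < 1e-12
print("the two forms agree")
```

So the tanh form is just the usual BCE with its inputs rescaled; it is not a separate loss.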

Here is a nice reference implementation that helped me.
link : https://github.com/pytorch/examples/blob/master/dcgan/main.py

#===========================================
criterion = nn.BCELoss()
optimizerD = optim.Adam(netD.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
#===========================================
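For context, here is how the two activations sit relative to the loss in code like this, sketched with scalar stand-ins (my own toy, with hypothetical one-number versions of netG and netD; not code from the repo):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce(y, a):
    # binary cross entropy: target y in {0, 1}, output a in (0, 1)
    return -(y * math.log(a) + (1 - y) * math.log(1 - a))

def netG(z):
    # generator ends in tanh: its output is an "image" in (-1, 1)
    return math.tanh(z)

def netD(x):
    # discriminator ends in sigmoid: its output is a probability in (0, 1)
    return sigmoid(3.0 * x)

fake = netG(0.4)        # tanh output: an image, not a probability
prob = netD(fake)       # the loss only ever sees netD's sigmoid output
loss_G = bce(1.0, prob) # generator update: push D toward saying "real"
```

The tanh output never reaches the criterion directly; it is fed through the discriminator first, so BCELoss always receives a value in (0, 1).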

Thank you for your interest.

I’m sorry. It was my misunderstanding. I have solved this.

May I know what you misunderstood? I also could not understand why nn.BCELoss() can be used with tanh instead of sigmoid.