Interpreting GAN Discriminator loss

I’m using a discriminator to classify real and fake segmentation masks. The loss is binary cross-entropy. The implementation looks something like this:

# Adversarial ground truths (batch size of 5)
valid = Variable(Tensor(5, 1).fill_(1.0), requires_grad=False)
fake = Variable(Tensor(5, 1).fill_(0.0), requires_grad=False)

real_img = Variable(batch_labels.type(Tensor))  # Ground-truth masks
fake_img = Variable(output.type(Tensor))        # Predicted masks

bce = nn.BCELoss().cuda(GPU)

# Measures the segmenter's ability to fool the discriminator
adv_loss = bce(discriminator(fake_img), valid)

I came up with an approach where, instead of passing the valid tensor to the BCE loss, we pass the discriminator's output on the ground-truth masks. Something like this:

adv_loss = bce(discriminator(fake_img), discriminator(real_img))

This is the adversarial-loss implementation I came up with. I just wanted to verify that it's correct for this approach.

In GANs, the discriminator is typically trained to distinguish between real and fake samples, and it can indeed be trained for this with binary cross-entropy.
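For reference, a standard discriminator update with BCE looks like the following minimal sketch. The `discriminator` architecture, tensor shapes, and learning rate here are placeholders, not your actual setup; the point is the shape of the loss: real samples are pushed toward the label 1, fakes toward 0, and the generator/segmenter output is detached so only the discriminator is updated.

```python
import torch
import torch.nn as nn

# Hypothetical discriminator: flattened mask -> real/fake probability.
discriminator = nn.Sequential(
    nn.Linear(16, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
    nn.Sigmoid(),  # BCELoss expects probabilities in [0, 1]
)

bce = nn.BCELoss()
opt = torch.optim.SGD(discriminator.parameters(), lr=0.01)

real_img = torch.rand(5, 16)  # stand-in for ground-truth masks
fake_img = torch.rand(5, 16)  # stand-in for the segmenter's output
valid = torch.ones(5, 1)      # label 1 for real samples
fake = torch.zeros(5, 1)      # label 0 for fake samples

# Standard discriminator update: real -> 1, fake -> 0.
# detach() blocks gradients from flowing into the segmenter here.
d_loss = bce(discriminator(real_img), valid) + \
         bce(discriminator(fake_img.detach()), fake)

opt.zero_grad()
d_loss.backward()
opt.step()
```

The segmenter's adversarial loss is then the separate term `bce(discriminator(fake_img), valid)`, which rewards fooling the discriminator.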

However, I have a hard time understanding the rationale behind your implementation.

What is the goal? Are you attempting to implement standard GAN training? Or is your use case a special one?

Could you please explain the task and the desired behavior in more detail? It's difficult to help otherwise.