Interpreting GAN Discriminator loss

I’m using a discriminator to classify segmentation masks as real or fake. The loss is binary cross-entropy (BCE). The implementation looks something like this:

# Adversarial ground truths
valid = Variable(Tensor(5, 1).fill_(1.0), requires_grad=False)
fake = Variable(Tensor(5, 1).fill_(0.0), requires_grad=False)

real_img = Variable(batch_labels.type(Tensor))  # Ground-truth masks
fake_img = Variable(output.type(Tensor))        # Predicted masks

bce = nn.BCELoss().cuda(GPU)

adv_loss = bce(discriminator(fake_img), valid)  # Measures the segmenter's ability to fool the discriminator
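For context, the standard adversarial term above can be reproduced in a minimal, self-contained sketch. The toy discriminator, the 16×16 mask shape, and the random `fake_img` below are placeholders standing in for the real model and data:

```python
import torch
import torch.nn as nn

# Toy discriminator standing in for the real one (illustrative only)
discriminator = nn.Sequential(
    nn.Flatten(),
    nn.Linear(16 * 16, 1),
    nn.Sigmoid(),  # outputs a probability in (0, 1), as nn.BCELoss requires
)

batch_size = 5
fake_img = torch.rand(batch_size, 1, 16, 16)  # stands in for the predicted masks
valid = torch.ones(batch_size, 1)             # adversarial ground truth: "real"

bce = nn.BCELoss()

# Segmenter's adversarial loss: how well fake masks pass as real
adv_loss = bce(discriminator(fake_img), valid)
print(adv_loss.item())
```

Because the discriminator ends in a sigmoid, its output is strictly between 0 and 1, so this loss is always positive.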

I came up with an approach where, instead of passing the valid tensor to the BCE loss, we use the discriminator's output on the ground-truth masks as the target. Something like this:

adv_loss = bce(discriminator(fake_img), discriminator(real_img))
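One way to see how this variant behaves is to compare the two losses side by side. This is a hedged sketch with a toy discriminator and random data, not the actual model; note that `nn.BCELoss` expects targets in [0, 1], and the discriminator output used as a target is detached here so that gradients do not flow into the discriminator through the target:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy discriminator (illustrative only)
discriminator = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())
bce = nn.BCELoss()

fake_img = torch.rand(5, 1, 8, 8)                    # stands in for predicted masks
real_img = (torch.rand(5, 1, 8, 8) > 0.5).float()    # stands in for ground-truth masks

valid = torch.ones(5, 1)

# Standard adversarial loss: hard target of 1.0
loss_standard = bce(discriminator(fake_img), valid)

# Proposed variant: the target is now a soft probability in (0, 1),
# detached so the target branch does not receive gradients
loss_variant = bce(discriminator(fake_img), discriminator(real_img).detach())

print(loss_standard.item(), loss_variant.item())
```

Since the sigmoid output on real images is never exactly 1.0, the variant optimizes toward a different (soft) target than the standard formulation, so the two losses are not interchangeable.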

This is the implementation I came up with for the adversarial loss. I just want to verify whether it is correct for this approach.

Would the performance of the model be the same?

I do not think it is the correct implementation, but I am not sure.