Can the discriminator of a GAN produce high probability values for both real and fake images?

I am training a GAN model for image style transfer. The discriminator tries to classify real versus stylized images. Ideally, at the start of training the discriminator should assign high probabilities to the real images and low probabilities to the stylized images. These are some of the values I got:

For real images: [[[0.8925122618675232]], [[0.8756555318832397]], [[0.8772845268249512]], [[0.8345637321472168]]]
For corresponding stylized images: [[[0.4710726737976074]], [[0.47152334451675415]], [[0.471288800239563]], [[0.47164005041122437]]]

But as training proceeds, after around 60 iterations, the discriminator starts producing strange results and becomes stuck at a probability of 1.0 for the stylized images:

For real images after 60 iterations: [[[0.9994499683380127]], [[0.9997959136962891]], [[0.9631960391998291]], [[0.9862018823623657]]]
For corresponding stylized images after 60 iterations: [[[1.0]], [[1.0]], [[1.0]], [[1.0]]]
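One detail worth noting about the exactly-1.0 readings: if the discriminator ends in a sigmoid (my assumption, since the architecture is not shown), any sufficiently large logit rounds to exactly 1.0 in float32, so the printed probabilities stop distinguishing levels of confidence. A minimal NumPy sketch:

```python
import numpy as np

def sigmoid(z):
    # In float32, 1 / (1 + exp(-z)) becomes exactly 1.0 once exp(-z)
    # falls below float32 machine epsilon (~1.19e-7), i.e. for logits
    # above roughly 16-17.
    z = np.asarray(z, dtype=np.float32)
    return (1.0 / (1.0 + np.exp(-z))).astype(np.float32)

print(sigmoid([5.0, 10.0, 20.0]))  # the last entry prints as exactly 1.0
```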

I trained for up to 50000 iterations and it is still stuck at 1.0. The adversarial loss function I used is the standard GAN objective:

L_adv = E_x[log D(x)] + E_{x,y}[log(1 - D(G(x, y)))]

where x is the real image and G(x, y) is the stylized image.
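To make the behaviour concrete, here is a minimal NumPy sketch of the discriminator's adversarial objective as I understand it (E[log D(x)] + E[log(1 - D(G(x, y)))]; the exact form in my code may differ, and the `eps` clamp is my addition to avoid log(0)):

```python
import numpy as np

def adversarial_loss(d_real, d_fake, eps=1e-8):
    """Discriminator objective: E[log D(x)] + E[log(1 - D(G(x, y)))].
    d_real: discriminator probabilities on real images.
    d_fake: discriminator probabilities on stylized images.
    """
    d_real = np.asarray(d_real, dtype=np.float64)
    d_fake = np.asarray(d_fake, dtype=np.float64)
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

# With the early values from above, the loss is moderate:
early = adversarial_loss([0.8925, 0.8757], [0.4711, 0.4715])

# Once D(G(x, y)) saturates at exactly 1.0, log(1 - 1.0) collapses to
# log(eps), so the generator receives essentially no usable gradient:
late = adversarial_loss([0.9994, 0.9998], [1.0, 1.0])
print(early, late)
```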
Why does the discriminator behave in this manner, and under what conditions does this happen? Is there a way to rectify this issue?