Problem with GAN (Pix2Pix) discriminator and generator loss

I would recommend comparing the number of parameters between the two models first.
Unfortunately, I don't know how to make the Keras code executable, so I can't run this check on both models myself.
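
A minimal sketch of such a comparison, where keras_model and torch_model are placeholder names for your two discriminator (or generator) instances:

# keras_model / torch_model are placeholders for your two model instances
# Keras: count_params() sums all trainable and non-trainable weights
print(keras_model.count_params())

# PyTorch: sum the element counts of all registered parameters
print(sum(p.numel() for p in torch_model.parameters()))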

As the next step, I would compare the output shapes, since e.g. your Keras model uses

import numpy as np
import tensorflow as tf

# dummy input in channels-last layout and the last conv layer of the discriminator
layer_input = np.random.randn(1, 24, 24, 1)
validity = tf.keras.layers.Conv2D(1, kernel_size=(4, 1), strides=1, padding='same')(layer_input)
print(validity.shape)
> TensorShape([1, 24, 24, 1])

as the last conv layer in the discriminator, which doesn’t seem to output the same shape in the PyTorch code for a dummy input:

import torch
import torch.nn as nn

# the same dummy input in channels-first layout and the corresponding PyTorch layer
x = torch.randn(1, 1, 24, 24)
conv = nn.Conv2d(1, 1, kernel_size=(4, 1), padding=(1, 0), stride=1)
out = conv(x)
print(out.shape)
> torch.Size([1, 1, 23, 24])
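
The mismatch comes from the symmetric padding=(1, 0): with an even kernel size of 4 and stride 1, Keras' 'same' padding pads asymmetrically (1 row at the top, 2 at the bottom). Assuming you are on PyTorch >= 1.9, a minimal sketch of reproducing this via padding='same' (which only supports stride 1):

# asymmetric 'same' padding is handled internally for the even kernel size
conv_same = nn.Conv2d(1, 1, kernel_size=(4, 1), stride=1, padding='same')
out = conv_same(x)
print(out.shape)
> torch.Size([1, 1, 24, 24])

Alternatively, you could keep padding=0 and pad the input manually with torch.nn.functional.pad(x, (0, 0, 1, 2)) before the conv.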

Once this is done, I would look into the layer initializations, as the defaults most likely differ between the frameworks: Keras' Conv2D uses glorot_uniform for the kernel and zeros for the bias by default, while PyTorch's Conv2d uses kaiming_uniform for the weight and a fan-in-based uniform distribution for the bias.
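
A minimal sketch of aligning the PyTorch model with the Keras defaults, again assuming torch_model is a placeholder for your PyTorch model (note that the weight layout of transposed convs differs between the frameworks, so the fan computation is only an approximation there):

import torch.nn as nn

def init_like_keras(m):
    # mimic the Keras defaults: glorot_uniform kernels, zero biases
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
        nn.init.xavier_uniform_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

torch_model.apply(init_like_keras)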