Strange Unet artifact

I am using a UNet model (a simple encoder with average pooling and a decoder using ConvTranspose2d) for image upsampling (super-resolution). A pair of images (input on the left, target in the middle) is used for training, as shown below. The input image is created by zeroing out every other column of the target image.
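For clarity, the input-generation step I described can be sketched roughly like this (a minimal NumPy sketch; I'm assuming here that the odd-indexed columns are the ones zeroed out):

```python
import numpy as np

def make_input(target: np.ndarray) -> np.ndarray:
    """Zero out every other column of the target to form the network input."""
    inp = target.copy()
    inp[:, 1::2] = 0  # assumption: odd columns are the ones removed/zero-padded
    return inp

target = np.arange(12, dtype=np.float32).reshape(3, 4)
inp = make_input(target)
```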

During training, upon zooming in, I can see vertical line artifacts (image on the right) in the reconstructed image (model output).

What can be done to rectify the artifacts? I am using the L1 and L2 norms as loss functions. The training set contains 6,000 grayscale images of size 672 x 1024.
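In case it matters for the discussion, this is roughly how I combine the two losses (a sketch; the weighting factor `alpha` is just an illustrative choice, not my exact setup):

```python
import numpy as np

def combined_loss(pred: np.ndarray, target: np.ndarray, alpha: float = 0.5) -> float:
    """Weighted sum of the L1 (mean absolute) and L2 (mean squared) losses."""
    diff = pred - target
    l1 = np.mean(np.abs(diff))       # L1 norm term
    l2 = np.mean(diff ** 2)          # L2 norm term
    return alpha * l1 + (1 - alpha) * l2
```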

Your output and target look close to the same, just flipped vertically. You sure you aren’t flipping one of them before viewing?

Those are pretty big images. Are you patching them (if so, experiment a bit more there) or feeding them whole?

@J_Johnson Yes, the output looks close to the target. I have uploaded the full-size validation target and output images at Unet — ImgBB. You can see the artifacts when you zoom in on the output.

@Soumya_Kundu, I am feeding the whole image. I will give reducing/patching the images a try and see if that works.
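Something like this is what I have in mind for patching (a rough sketch with non-overlapping patches; the patch size of 128 is arbitrary):

```python
import numpy as np

def extract_patches(img: np.ndarray, size: int = 128, stride: int = 128) -> list:
    """Slice a 2D image into square patches; non-overlapping when stride == size."""
    patches = []
    h, w = img.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(img[y:y + size, x:x + size])
    return patches
```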

Thanks for the response guys!