Why do white holes appear when training an image reconstruction model end-to-end?

Hi,

I am training an image reconstruction model end-to-end.

However, when I look at the validation results during training, as shown in the figure below, holes often appear in the image and remain as artifacts even as training progresses.
[attached image: validation output showing white hole artifacts]

I'm using a CNN with instance normalization and LeakyReLU, but I'm not sure what the problem is.
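
For context, each block in my network looks roughly like this (a simplified sketch; the channel and kernel sizes are just placeholders, not my exact configuration):

```python
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Conv -> InstanceNorm -> LeakyReLU, the pattern used throughout the network
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.InstanceNorm2d(out_ch),        # normalizes each sample and channel separately
        nn.LeakyReLU(0.2, inplace=True),
    )
```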
I'm using L1 loss, perceptual loss, and adversarial loss, and even when I greatly increase the weight of the L1 loss, I get the same result.
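
The generator objective is combined roughly like this (a minimal sketch; the weights and the `perceptual_fn` / `disc_fake_logits` names are placeholders, not my exact code):

```python
import torch
import torch.nn.functional as F

# Illustrative weights only; increasing lambda_l1 did not remove the holes.
lambda_l1, lambda_perc, lambda_adv = 100.0, 1.0, 1.0

def total_loss(output, target, perceptual_fn, disc_fake_logits):
    l1 = F.l1_loss(output, target)
    perc = perceptual_fn(output, target)  # e.g. a VGG feature distance
    # non-saturating generator loss against the discriminator's logits
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return lambda_l1 * l1 + lambda_perc * perc + lambda_adv * adv
```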

Has anyone experienced something similar and found a way to overcome it?

Thanks!

Hey, I encountered a similar problem when I was working on an image colorization project. Some parts of the image were black while other parts seemed fine. I solved it by experimenting with different network architectures. The network from the pix2pix paper worked fine, if I remember correctly. Best of luck!

Thanks for the reply!!
Changing the network may be one solution, but I am wondering what is actually causing the problem.

I solved the problem above by changing the normalization method used after the transposed convolutions.

If anyone faces a similar problem, it may be better to remove the normalization or use layer norm instead.
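
As a rough sketch of what I mean (layer sizes are placeholders, and here `GroupNorm` with a single group stands in for layer norm so it works on feature maps of any spatial size):

```python
import torch.nn as nn

def up_block(in_ch, out_ch, norm="layer"):
    # Transposed-convolution upsampling block with the instance norm swapped out
    layers = [nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1)]
    if norm == "layer":
        # GroupNorm with one group normalizes over (C, H, W) per sample,
        # i.e. layer-norm behavior without needing a fixed spatial size
        layers.append(nn.GroupNorm(1, out_ch))
    # norm == "none": skip normalization entirely
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return nn.Sequential(*layers)
```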