GAN-generated image is correct but too light


I’ve been trying to use CycleGAN for image inpainting: I pass in real A (which has a small black mask in the bottom-right corner for the second example), get the result fake B, and compare against the ground truth real B. I also generate fake A by passing fake B into the reverse generator, and it is also too light, but I didn’t show it here to keep the post simple. The point is that both images synthesized by the generators come out a shade too light, and I’ve experimented with other datasets and other tasks with CycleGAN and this pattern is consistent. Does anyone have an idea what could be happening?
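To make the flow concrete, this is roughly what I’m describing (G_AB and G_BA are just placeholder names for the two generators, and the tiny models below are only stand-ins so the snippet runs):

```python
import torch
import torch.nn as nn

# Stand-in generators ending in tanh, only to illustrate the data flow;
# the real CycleGAN generators are of course much bigger.
G_AB = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
G_BA = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())

real_A = torch.rand(1, 3, 256, 256) * 2 - 1   # masked input, normalized to [-1, 1]
with torch.no_grad():
    fake_B = G_AB(real_A)   # inpainted result, compared against real B
    fake_A = G_BA(fake_B)   # cycle reconstruction (also comes out too light)
```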

The last layer is tanh, so the output range is -1 to 1; to get it back into the correct range I add 1, divide by 2, and multiply by 255. I also make sure to normalize the input images to -1 to 1, so that the generator output and the input image are in the same range when the losses are computed.
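Roughly, those two conversions look like this (a sketch rather than my exact code; normalize_input is just an illustrative name, convert_from_tanh is the denormalization function I mention below):

```python
import torch

def normalize_input(img_uint8):
    # Scale an 8-bit image from [0, 255] into [-1, 1] so it matches the
    # generator's tanh output when the losses are computed.
    return img_uint8.float() / 255.0 * 2.0 - 1.0

def convert_from_tanh(img):
    # Map the tanh output from [-1, 1] back to [0, 255]:
    # add 1, divide by 2, multiply by 255.
    return (img + 1.0) / 2.0 * 255.0
```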

I think you should check the input of the tanh layer first.
My guess is that the pre-tanh values are (nearly) all positive, so the output pixels of your generator range from 0 to 1 rather than -1 to 1. After you denormalize them, the synthesized image ranges from about 127 to 255 (roughly equivalent to adding 127 to the original image), so the result looks lighter than the ground truth.
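Something like this should show it (a PyTorch sketch; the tiny netG below is only a stand-in, swap in your own generator and a real batch):

```python
import torch
import torch.nn as nn

# Stand-in generator so the snippet runs; replace with your own netG and input.
netG = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh())
real_A = torch.rand(1, 3, 256, 256) * 2 - 1

pre_tanh = {}

def grab_pre_tanh(module, inputs, output):
    # inputs[0] is whatever feeds the final Tanh layer
    x = inputs[0]
    pre_tanh['stats'] = (x.min().item(), x.mean().item(), x.max().item())

for m in netG.modules():
    if isinstance(m, nn.Tanh):
        m.register_forward_hook(grab_pre_tanh)

with torch.no_grad():
    _ = netG(real_A)

# If the minimum is already >= 0, the tanh output lives in [0, 1] instead of [-1, 1].
print(pre_tanh['stats'])
```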

That makes perfect sense, thank you, but unfortunately I don’t think that’s the case. I checked the max and min values of fake_A and fake_B before denormalizing them (i.e. before changing the range from -1 to 1 to 0 to 255; I attached a screenshot of the function, convert_from_tanh), and the generated images’ min and max are indeed about -0.99 and 0.99, so it seems like the range is correct?
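For reference, the check was basically this (the dummy tensor below just stands in for the generator output):

```python
import torch

# Inspect fake_B straight out of the generator, before convert_from_tanh is applied.
fake_B = torch.tanh(torch.randn(1, 3, 256, 256))
print(fake_B.min().item(), fake_B.max().item())  # on my real fake_B this prints about -0.99 and 0.99
```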


Hi! I’m experiencing the same issue and the range also seems to be okay, so I’m curious whether you have figured anything out in the meantime?