Issues with model behaviour when using .eval() mode

I need some help understanding a strange issue I’ve encountered with the `.eval()` mode for PyTorch models. When I try some pretrained models, they do not perform as intended in `.eval()` mode. I have tested pretrained models from several repositories: MaLP (GitHub: vishal3477/pro_loc), StarGAN (GitHub: yunjey/stargan), and GDWCT (GitHub: WonwoongCho/GDWCT).

Unfortunately, all models exhibit unexpected behavior when switched to evaluation mode, and I’m struggling to identify the root cause.

Other users have reported a similar problem in this GitHub issue, but no one has been able to find a fix.

I would greatly appreciate any insights or suggestions.
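For context, here is a minimal sketch (my own illustration, not code from the repositories above) of the kind of train/eval divergence I mean. Layers like BatchNorm behave differently in the two modes: in `train()` they normalize with the current batch statistics, while in `eval()` they use the running estimates accumulated during training, so the same input can produce different outputs.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# BatchNorm is a common source of train/eval discrepancies.
bn = nn.BatchNorm2d(3)
x = torch.randn(2, 3, 4, 4)

bn.train()
out_train = bn(x)  # normalizes with this batch's mean/variance

bn.eval()
out_eval = bn(x)   # normalizes with running_mean / running_var instead

# The two outputs differ because the running estimates do not match
# the batch statistics exactly.
print(torch.allclose(out_train, out_eval))  # False
```

The models I listed are GAN-based and use normalization layers heavily, so I suspect something along these lines, but I can’t tell why the gap is large enough to break their outputs entirely.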

---

Python version: 3.7
PyTorch version: 1.13.1