What could be the reason for a model to lose all progress when put into .eval() mode?


I trained a GAN as my first PyTorch project and it worked really nicely.
However, whenever I put the generator into .eval() mode under a torch.no_grad() wrapper to get some outputs, I just get a uniform output, as if from an untrained model.

When I do the same without calling generator.eval() first, it works fine.

I thought .eval() just makes sure dropout, batch norm, etc. are all in evaluation mode, so I don’t see how it would affect the outputs, unless I misunderstood what .eval() does.

Thanks in advance for any tips and thoughts of things I could try.
It’s no biggie, other than that I want to understand PyTorch better.

Yes, your assumption is right: calling model.eval() will set all modules into evaluation mode. E.g., batchnorm layers will use their running stats to normalize the input activations instead of the current batch stats. Depending on how well these running stats fit your dataset (and assuming they can even converge towards the “dataset mean and var”), this approach could work well or fail (as seen in other posts here).
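
A minimal sketch of this effect (assuming a toy setup, not your actual GAN): a single BatchNorm1d layer whose running stats were estimated on one distribution, then fed a batch from a very different one. In train mode the batch is normalized with its own stats; in eval mode the stale running stats are used, so the output distribution shifts dramatically:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
bn = nn.BatchNorm1d(1, momentum=0.1)

# "Train" on data centered at 0, so running_mean/running_var settle near 0 and 1.
for _ in range(100):
    bn(torch.randn(64, 1))

# Now feed a batch from a very different distribution (mean ~5, std ~2).
x = 5.0 + 2.0 * torch.randn(64, 1)

bn.train()
with torch.no_grad():
    out_train = bn(x)  # normalized with the *batch* stats -> roughly zero mean

bn.eval()
with torch.no_grad():
    out_eval = bn(x)   # normalized with the *running* stats -> mean stays near 5

print(out_train.mean().item())  # close to 0
print(out_eval.mean().item())   # close to 5
```

If the distribution of activations inside your generator differs between training and inference batches (which is common in GANs), the eval-mode running stats can be badly off, which matches the "untrained model" output you are seeing.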

Oh, thank you so much! So, I’m guessing what I’m seeing is somewhat the mean of the training data. Hmmm. Interesting! Thanks again!