Pre-trained GoogLeNet performance dropped when the input image was normalized with ImageNet statistics

Hi,
I used the pre-trained GoogLeNet model and normalized the input images with the ImageNet statistics, but the performance dropped significantly. When I instead fed the model input tensors in the range [0, 1], it worked noticeably better. Has anyone had experience with this issue?
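For reference, the two preprocessing variants I compared look roughly like this (a minimal sketch; the mean/std values are the standard ImageNet statistics):

```python
import torch
from torchvision import transforms

# Variant 1: scale to [0, 1] and normalize with the ImageNet statistics
imagenet_preproc = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),  # scales pixel values to [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Variant 2: keep the tensor in [0, 1] (no Normalize step)
raw_preproc = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
```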


What kind of data are you using at the moment and what accuracies do you get using both approaches?

Thank you very much for your quick reply. Let me explain my task. I worked on the ImageNet validation dataset to train an adversarial perturbation generator, in which a frozen GoogLeNet model was used (see the sketch below).
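Roughly, I freeze the pre-trained GoogLeNet like this before training the generator (a minimal sketch; the generator itself is omitted):

```python
import torch
from torchvision import models

# Load the pre-trained GoogLeNet and freeze all of its parameters
googlenet = models.googlenet(pretrained=True)
googlenet.eval()
for p in googlenet.parameters():
    p.requires_grad = False

# During generator training, gradients flow through the frozen model
# back to the perturbation, but the GoogLeNet weights never change.
```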

I saw this line of code and inferred that when we set pretrained=True in the GoogLeNet constructor, the forward function normalizes the input image differently (it seems to use different statistics).
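If I remember the torchvision source correctly, setting pretrained=True enables transform_input, and the forward pass then applies something along these lines: it assumes the input is already ImageNet-normalized and remaps it to the mean=0.5, std=0.5 normalization the ported weights expect (a sketch from memory, not a verbatim quote of the library):

```python
import torch

def transform_input(x: torch.Tensor) -> torch.Tensor:
    # x is assumed to be ImageNet-normalized (mean/std above); this remaps it
    # channel by channel to a mean=0.5, std=0.5 normalization.
    x_ch0 = torch.unsqueeze(x[:, 0], 1) * (0.229 / 0.5) + (0.485 - 0.5) / 0.5
    x_ch1 = torch.unsqueeze(x[:, 1], 1) * (0.224 / 0.5) + (0.456 - 0.5) / 0.5
    x_ch2 = torch.unsqueeze(x[:, 2], 1) * (0.225 / 0.5) + (0.406 - 0.5) / 0.5
    return torch.cat((x_ch0, x_ch1, x_ch2), 1)
```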

Regarding accuracy, I did not calculate it directly, but in terms of the fooling rate I achieved in the adversarial attack task, the input images in the range [0, 1] performed far better.
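For completeness, this is roughly how I measure the fooling rate, i.e. the fraction of inputs whose predicted label changes after the perturbation is added (a hypothetical helper; model, images, and perturbation are placeholders, and inputs are assumed to be in [0, 1]):

```python
import torch

@torch.no_grad()
def fooling_rate(model, images, perturbation):
    # Fraction of samples whose predicted class changes after the
    # adversarial perturbation is added to the (clamped) input.
    clean_pred = model(images).argmax(dim=1)
    adv_pred = model((images + perturbation).clamp(0, 1)).argmax(dim=1)
    return (clean_pred != adv_pred).float().mean().item()
```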