Different backpropagation gradients in GoogLeNet

I imported GoogLeNet weights from Caffe to PyTorch, and the model has reasonable accuracy. However, for the same image the backpropagation gradients change between runs, even after dropout is turned off. Is this an instability in PyTorch?

How did you compare the gradients?
Did you zero them after each run?
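For example, something along these lines (a minimal sketch; `model`, `image`, and the class index are placeholders for your imported GoogLeNet and input) should give identical gradients on repeated calls once dropout is disabled and stale gradients are cleared:

```python
import torch

# Placeholders: `model` is the imported GoogLeNet, `image` a preprocessed
# input tensor of shape (1, 3, 224, 224).
model.eval()  # disables dropout so the forward pass is deterministic

def input_grad(model, image, target_class):
    x = image.clone().requires_grad_(True)
    model.zero_grad()                # clear any parameter gradients left over from a previous run
    out = model(x)
    out[0, target_class].backward()  # backprop a single logit
    return x.grad.detach().clone()

g1 = input_grad(model, image, target_class=0)
g2 = input_grad(model, image, target_class=0)
print(torch.allclose(g1, g2), (g1 - g2).abs().max())
```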

Sorry, I didn’t directly visualize the gradients before; I used them in another system and checked its output. Today I tried comparing the sums of the gradients, and they seem to be stable even with the dropout layers (surprisingly), so this looks like a false alarm. Thank you very much for your help, and sorry for the false alarm.
Anyhow, is there any possibility that the positions of the gradients change for reasons other than dropout (LRN, etc.)? I was trying to train a mask that modifies the original image until a target activation becomes zero, using the Adam optimizer, but the system tends to produce different visual results when the run is repeated. That is why I asked the question. Is it optimizer instability? (Yes, I did zero the gradients :slight_smile: )
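For context, the loop looks roughly like this (a simplified sketch; the model, target class, learning rate, and number of steps here are placeholders, not my exact code), with seeds fixed so that repeated runs should match:

```python
import torch

torch.manual_seed(0)                       # fix random init so runs are repeatable
torch.backends.cudnn.deterministic = True  # avoid nondeterministic cuDNN kernels
torch.backends.cudnn.benchmark = False

# Placeholders: `model` is the imported GoogLeNet, `image` a preprocessed input.
model.eval()
target_class = 0                           # hypothetical class whose activation is driven to zero
mask = torch.ones_like(image, requires_grad=True)
optimizer = torch.optim.Adam([mask], lr=0.1)

for step in range(300):
    optimizer.zero_grad()                  # zero gradients every iteration
    out = model(image * mask)
    loss = out[0, target_class] ** 2       # push the target activation toward zero
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        mask.clamp_(0, 1)                  # keep the mask in a valid range
```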