[Resolved] Loss varies even when the network is in evaluation mode for the same model

I was troubleshooting my code when I came across a somewhat odd phenomenon. I’m currently using a smaller AlexNet model with batchnorm and dropout removed (I wanted to make sure it wasn’t these two causing the problems). I switch the model to eval mode, do no gradient steps, and compute the MSE loss for a randomly initialized model on a single image (there is no point to this; I just wanted to verify the output was constant). In principle the loss should stay constant, but it seems to be alternating between two values. Is there any explanation for what might be going on? I doubt this is the cause of the problems in my experiments, but it would be nice to know why it’s happening.

[0][1/1] Loss: [0.518107] Time batch: [0.000004]
[1][1/1] Loss: [0.517437] Time batch: [0.000005]
[2][1/1] Loss: [0.518107] Time batch: [0.000003]
[3][1/1] Loss: [0.517437] Time batch: [0.000002]
[4][1/1] Loss: [0.518107] Time batch: [0.000003]
[5][1/1] Loss: [0.517437] Time batch: [0.000002]
[6][1/1] Loss: [0.518107] Time batch: [0.000004]
[7][1/1] Loss: [0.517437] Time batch: [0.000004]
[8][1/1] Loss: [0.518107] Time batch: [0.000004]
[9][1/1] Loss: [0.517437] Time batch: [0.000002]
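
For reference, here is a stripped-down sketch of what my loop does (the toy model, shapes, and names below are stand-ins, not my actual code); with a fixed input and no randomness anywhere, this should print the same loss on every iteration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for my randomly initialized model with no batchnorm/dropout
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten(),
                      nn.Linear(8 * 30 * 30, 10))
model.eval()  # eval mode (a no-op here, since there is no BN/dropout)

image = torch.randn(1, 3, 32, 32)   # one fixed input image
target = torch.randn(1, 10)         # one fixed target
criterion = nn.MSELoss()

with torch.no_grad():               # no gradient steps
    for epoch in range(10):
        loss = criterion(model(image), target)
        print(f"[{epoch}][1/1] Loss: [{loss.item():.6f}]")
```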

OK, I figured it out. It’s not a problem with anything in the framework. I had a random transformation in the transform I create: one loss value corresponds to the flipped image and the other to the original.
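
For anyone who hits the same thing, the setup was roughly along these lines (torchvision names; my actual transform was similar but not identical). The random flip belongs in the training transform only; evaluation should use a deterministic one:

```python
from torchvision import transforms

# Roughly what I had: a random flip baked into the transform, so the loader
# sometimes feeds the model the original image and sometimes the mirrored
# one -- hence the two alternating loss values even in eval mode.
train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),  # the culprit
    transforms.ToTensor(),
])

# Deterministic transform for evaluation: same tensor every time
eval_tf = transforms.Compose([
    transforms.ToTensor(),
])
```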

Can someone please delete this? Thank you.