I’m getting different values out of a pretrained ResNet across epochs. This is how I’m initialising the CNN:
self.cnn = models.resnet101(pretrained=True)
self.trimmed_cnn = nn.Sequential(*list(self.cnn.children())[:-2])  # drop the final FC and average-pool layers
for param in self.trimmed_cnn.parameters():
    param.requires_grad = False
When I evaluate
img_features = self.trimmed_cnn(img)
during epoch 1 of training, the result is different from the same evaluation during epoch 3. I’ve made sure the input img is identical in both cases. I know that convolution kernels on the GPU can be non-deterministic, but the differences I’m seeing are far larger than floating-point noise.
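For completeness, this is roughly how I tried to rule out cuDNN nondeterminism before suspecting anything else (a sketch using the standard torch.backends flags; with these set, repeated forward passes on the same input should match bit-for-bit, at some speed cost):

```python
import torch

# Force cuDNN to pick deterministic convolution algorithms
# and disable the benchmark autotuner, which can select
# different (non-deterministic) kernels between runs.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```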
Within epoch 1, if I evaluate
img_features = self.trimmed_cnn(img)
twice, both calls give the same result. This makes me suspect that the weights of the CNN are somehow being modified between epochs, despite setting
param.requires_grad = False. What could cause this?
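To narrow this down, here is a self-contained check I put together. It is a sketch: a tiny Conv + BatchNorm stack stands in for resnet101 so it runs quickly, but the freezing pattern is the same as above. It snapshots both the parameters and the buffers before a few train-mode forward passes, then compares:

```python
import torch
import torch.nn as nn

# Stand-in for the trimmed ResNet (assumption: a small Conv2d + BatchNorm2d
# stack instead of resnet101, so the check runs without downloading weights).
torch.manual_seed(0)
cnn = nn.Sequential(nn.Conv2d(3, 4, 3), nn.BatchNorm2d(4))
for param in cnn.parameters():
    param.requires_grad = False

img = torch.randn(2, 3, 8, 8)

# Snapshot learnable parameters AND buffers. BatchNorm's running_mean /
# running_var are buffers, not parameters, so requires_grad never touches them.
params_before = {n: p.clone() for n, p in cnn.named_parameters()}
bufs_before = {n: b.clone() for n, b in cnn.named_buffers()}

# Simulate forward passes during training (module left in train() mode,
# as in my training loop).
for _ in range(3):
    _ = cnn(img)

params_same = all(torch.equal(p, params_before[n]) for n, p in cnn.named_parameters())
bufs_same = all(torch.equal(b, bufs_before[n]) for n, b in cnn.named_buffers())
print(params_same)  # True: no optimizer step, so the frozen weights are untouched
print(bufs_same)    # False: BatchNorm running statistics update on every train-mode forward
```

If the buffers change while the parameters stay identical, the drift would come from BatchNorm running statistics being updated during train-mode forward passes rather than from the frozen weights themselves.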