Why do gradients change every time?


#1

Dear all,

I have a trained model, and I'm trying to retrieve the gradients of the output w.r.t. some new input. I set input.requires_grad = True in advance, and I'm using autograd.grad(output, input, retain_graph=True) to get the gradients. However, with the same code, if I run it multiple times, I get a slightly different result every time. After a certain number of runs, the result stops changing. Why is this happening? Am I doing something wrong? Thank you!
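
For reference, the relevant part of my code looks roughly like this (the Linear layer and shapes below are just placeholders, not my actual network):

import torch

# Minimal sketch of the setup described above.
model = torch.nn.Linear(10, 1)               # stands in for the trained model
x = torch.randn(1, 10, requires_grad=True)   # input.requires_grad = True

output = model(x)
grads, = torch.autograd.grad(output.sum(), x, retain_graph=True)
print(grads.shape)  # same shape as the input, torch.Size([1, 10])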


#2

As far as I understand, you are not updating any weights, just computing the gradients w.r.t. the input?
Are you using BatchNorm layers? Their running statistics might be updated in the first few forward passes and then converge to the input mean and std.
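
For example, the running statistics of a BatchNorm layer shift from the forward pass alone, without any backward call or optimizer step (the layer and data below are made up, just to show the effect):

import torch

bn = torch.nn.BatchNorm1d(10)
x = torch.randn(25, 10)

bn.train()
before = bn.running_mean.clone()
_ = bn(x)                                        # forward pass only
print(torch.allclose(before, bn.running_mean))   # False: running stats moved

bn.eval()
before = bn.running_mean.clone()
_ = bn(x)
print(torch.allclose(before, bn.running_mean))   # True: stats frozen in eval mode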


#3

Yes, I'm just computing gradients; there is no updating, and I don't have any BatchNorm layers. The pipeline is pretty straightforward: since the model was trained with a minibatch size of 25, I first copy the new input 25 times to create a batch, and then from the output I extract only the first sample of the batch.
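
In code, that pipeline looks roughly like this (again, the Linear layer and shapes are placeholders for my actual model):

import torch

model = torch.nn.Linear(10, 1)               # stands in for the trained model
x = torch.randn(1, 10, requires_grad=True)   # the new input

batch = x.repeat(25, 1)                      # copy the input 25 times -> batch of 25
output = model(batch)[0]                     # keep only the first sample of the batch

grads, = torch.autograd.grad(output.sum(), x, retain_graph=True)
print(grads.shape)  # torch.Size([1, 10])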