How can I get the sum of gradients immediately after loss.backward()?

I am trying to do some importance sampling experiments:
During an evaluation epoch, I compute the loss for each training sample and obtain the sum of the gradients that sample produces. At the end, I sort the training samples by the gradients they introduced. For example, if sample A shows a very high gradient sum, it is presumably an important sample for training; otherwise, it is not a very important sample.

Note that the gradients calculated here will not be used to update parameters. In other words, they are only used for selecting important samples. A rough sketch of what I have in mind is below.
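
To make the setup concrete, here is roughly the per-sample loop I have in mind, assuming for illustration that I did have access to the model and a loader yielding one sample at a time (model, criterion, and sample_loader are placeholders, not names from my codebase):

importance = []
for idx, (x, target) in enumerate(sample_loader):   # one training sample per iteration
    model.zero_grad()                                # clear grads left over from the previous sample
    loss = criterion(model(x), target)
    loss.backward()                                  # populate param.grad for this sample only
    grad_sum = sum(p.grad.sum().item() for p in model.parameters() if p.grad is not None)
    importance.append((idx, grad_sum))

# samples with the largest gradient sums are treated as the most important
importance.sort(key=lambda item: item[1], reverse=True)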

I know the gradients will be available somewhere after loss.backward(). But what is the easiest way to grab the summed gradients over the entire model? In my current implementation, I am only allowed to modify one small module in which only the loss is available, so I don’t have “inputs” or “model”. Is it possible to get the gradients from only “loss”?

It’s easy to get the sum of gradients if you have the model:

# make gradients ready
loss.backward()
grad_sum = sum(param.grad.sum() for param in model.parameters() if param.grad is not None)
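
If you do this per sample, keep in mind that backward() accumulates into param.grad, so call model.zero_grad() before each backward. Alternatively, since these gradients are only used for ranking, you can avoid touching param.grad at all with torch.autograd.grad (a minimal sketch, assuming loss and model are in scope):

import torch

params = [p for p in model.parameters() if p.requires_grad]
grads = torch.autograd.grad(loss, params, allow_unused=True)   # tuple of gradients; param.grad stays untouched
grad_sum = sum(g.sum().item() for g in grads if g is not None)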

What do you mean

so I don’t have “inputs” or “model”

Where are your learnable parameters?

Thanks. I am working on an existing codebase, but I am only allowed to modify the evaluate_metric() module. It takes GT and Prediction as inputs, but I don’t have access to the network or to the network’s input data. Therefore, I cannot call “model.parameters()” at this point, because “model” is not passed into evaluate_metric(). By “inputs”, I meant the data (i.e., the RGB image tensor), which is not accessible inside evaluate_metric() either.

Sorry for the late reply. I don’t know of any approach that can sum up the gradients when only the loss is available inside such a function interface. Maybe you should ask the owner of the network for model access.
