No you cannot, but you can apply your hook to just part of the gradient:
def hook_fn(grad):
    new_grad = grad.clone()  # remember that hooks should not modify their argument
    new_grad[0].zero_()
    return new_grad

h = model.classifier.weight.register_hook(hook_fn)
Hi @albanD, I have seen your answers and wonder if there is a faster way of modifying the gradients. I tried to implement your code, but when using hooks my GPU stalls, and training or inference becomes much slower than without hooks. Is there any way of parallelizing the code, assuming that I want to check all elements within the gradient tensor?
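One way to keep such a check fast is to express it as vectorized tensor operations inside the hook, so it runs as a handful of GPU kernels instead of a Python loop over elements. A minimal sketch follows; the small Sequential model, the 1.0 threshold, and the "zero out large elements" rule are placeholder assumptions, not code from this thread:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 4))  # placeholder model, just for illustration

def hook_fn(grad):
    # Vectorized element-wise check: a boolean mask over the whole tensor,
    # computed on the GPU, instead of a Python loop over individual elements.
    # The "zero out anything with magnitude above 1.0" rule is a placeholder.
    new_grad = grad.clone()  # hooks should not modify their argument
    new_grad[new_grad.abs() > 1.0] = 0.0
    return new_grad

h = model[0].weight.register_hook(hook_fn)

out = model(torch.randn(2, 8)).sum()
out.backward()  # hook_fn runs here, once per backward pass

As long as the condition can be written with tensor ops like this, the hook adds only a few extra kernel launches per backward pass rather than per-element Python overhead.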