Modifying gradients between backward() and optimizer.step()

Hello

I have seen a couple of previous topics about observing or modifying the gradients of non-leaf tensors. Some of them are from 2018 or 2019 and suggest the .grad attribute; however, for non-leaf tensors it returns None. In the autograd docs I have also seen a lot of explanation about the requires_grad flag.
My question is: is it possible to see the gradient values of a tensor in the computational graph, and is it possible to modify the gradients after loss.backward() and before optimizer.step()?

Thanks.

@hamedB you need to register hooks on the intermediate tensors to get their gradients:

https://pytorch.org/docs/stable/generated/torch.Tensor.register_hook.html
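
Roughly like this (a minimal sketch with made-up shapes and scaling; returning a tensor from the hook replaces that gradient in the backward pass):

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x * 2                                 # y is a non-leaf (intermediate) tensor

# The hook receives the gradient flowing into y; the returned tensor
# replaces it, so downstream gradients see the modified value.
y.register_hook(lambda grad: grad * 0.5)

loss = y.sum()
loss.backward()
print(x.grad)                             # reflects the modified gradient of y
```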

Thanks @anantguptadbl for your reply. I am familiar with hooks, but I am not sure how to apply the modifications to the actual tensor. A hook can record the values in an external list; however, what I want is: after the gradients are computed by loss.backward(), add some values to each computed gradient and then call optimizer.step().
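For concreteness, this is roughly the pattern I have in mind (just a sketch; the model, optimizer, and the added values are placeholders), assuming the gradients I want to change end up in the parameters' .grad:

```python
import torch

# Placeholder model and optimizer, only for illustration.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 4)
loss = model(x).pow(2).mean()

optimizer.zero_grad()
loss.backward()

# Modify the accumulated gradients in place before the update.
with torch.no_grad():
    for p in model.parameters():
        if p.grad is not None:
            p.grad += 0.01 * torch.randn_like(p.grad)

optimizer.step()
```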
If you have any ideas, I would appreciate it.

Thanks