How to set requires_grad

import torch

a = torch.ones(3, 3)
print(a.requires_grad)        # False

print(a[0][0].requires_grad)  # False

a[0][0].requires_grad = True

print(a[0][0].requires_grad)  # False

I want to know why. How can I set requires_grad for only part of the parameters inside a kernel?


Hi,

requires_grad is a property of the whole Tensor; you cannot set it for only a subset of its elements.
You will need to set a.requires_grad = True and then extract the part of the gradient you are interested in after computing all of it: a.grad[0][0].
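For instance, a minimal sketch (the squared-sum loss is just a placeholder):

import torch

# requires_grad has to be set on the whole tensor
a = torch.ones(3, 3, requires_grad=True)

# any differentiable function of `a` works here
loss = (a * a).sum()
loss.backward()

# the full gradient is stored in a.grad; slice out the entry you need
print(a.grad[0][0])  # tensor(2.)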


Thanks,
But for a convolution kernel, for example, I want to keep the diagonal parameters fixed and learn the parameters at the other locations. So I wanted to set requires_grad = False for the diagonal parameters, but this is not possible. Is there any other way to do this?

Hi,

I’m afraid that’s not possible.
If you don’t use regularization, one way would be to set the gradients of the diagonal elements to 0 before the optimizer step.
But if you do have regularization (e.g. weight decay), those entries will still change.

Otherwise, you can save the diagonal of your parameters manually before training and then reset the weights to the original values after each optimizer step.
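A minimal sketch of both ideas, assuming a square 3x3 weight (the loss and optimizer here are placeholders):

import torch

weight = torch.randn(3, 3, requires_grad=True)
optimizer = torch.optim.SGD([weight], lr=0.1)

# save the original diagonal so it can be restored after the step
fixed_diag = weight.detach().diagonal().clone()

x = torch.randn(3)
loss = (weight @ x).sum()  # placeholder loss
loss.backward()

# option 1: zero the diagonal gradients before the step
with torch.no_grad():
    weight.grad.diagonal().zero_()

optimizer.step()

# option 2: restore the saved diagonal after the step
# (needed if weight decay / regularization still moved those entries)
with torch.no_grad():
    weight.diagonal().copy_(fixed_diag)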


Thank you very much, I will try.

How do you extract the gradient of interest exactly?
Thank you.

Hi,

As mentioned in the post, a.grad[0][0] will give you the right subset of the gradient.