When requires_grad is set to False, tensor grad is None

Hi, I have a question about autograd. The code is below:

import torch
from torch import nn

input = torch.randn(8, 3, 50, 100)

net = nn.Sequential(nn.Conv2d(3, 16, 3, 1), nn.Conv2d(16, 32, 3, 1))

net.named_parameters().__next__()[1].requires_grad = False

for param in net.named_parameters():
    print(param[0], param[1].requires_grad)

output = net(input)

net.named_parameters().__next__()[1].requires_grad = True
output.sum().backward()

for param in net.named_parameters():
    print((param[0], param[1].grad))

Part of the output is listed below:

0.weight False 
0.bias True 
1.weight True 
1.bias True
('0.weight', None)

I set requires_grad to False for 0.weight, then set it back to True before calling backward. But I still get None for the 0.weight grad. I'm wondering which step in the forward pass causes this.

Since you are resetting the requires_grad attribute to True after the forward pass, i.e. after the computation graph was already created, you most likely won't get the gradient for this particular parameter anymore.
Try resetting it before the forward pass and check the grad again.
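To illustrate, here is a minimal sketch with the same net. Since the original snippet doesn't show the backward call, out.mean() is used as a stand-in loss:

```python
import torch
from torch import nn

torch.manual_seed(0)
net = nn.Sequential(nn.Conv2d(3, 16, 3, 1), nn.Conv2d(16, 32, 3, 1))
x = torch.randn(8, 3, 50, 100)
w = net[0].weight

# Frozen during the forward pass: re-enabling afterwards is too late,
# because the graph was already recorded without w.
w.requires_grad = False
out = net(x)
w.requires_grad = True
out.mean().backward()
grad_when_frozen = w.grad       # still None

# requires_grad is True before the forward pass: the grad gets populated.
out = net(x)
out.mean().backward()
print(grad_when_frozen)         # None
print(w.grad is not None)       # True
```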

Thanks. But I'm still curious about what in the computation graph causes this. If I set requires_grad to False for this tensor, will the intermediate results not be stored?

Yes, if some parameters, and thus operations, don't need gradients, the intermediate tensors won't be stored. The same would happen if you wrap the code in a with torch.no_grad() block to save memory during the validation pass.
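You can see this directly on a toy example: when no graph is recorded, the output carries no grad_fn, which means no backward graph and no stored intermediates:

```python
import torch

x = torch.randn(4, requires_grad=True)

# Normal forward: a graph is recorded, so intermediates are kept for backward.
y = (x * 2).sum()
print(y.grad_fn is not None)   # True

# Inside no_grad: no graph is recorded, nothing extra is stored.
with torch.no_grad():
    z = (x * 2).sum()
print(z.grad_fn)               # None
print(z.requires_grad)         # False
```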