My linear layer's output does not have gradients

I have the following toy code:
import torch
import torch.nn.functional as F

testx = torch.tensor([[1.0, 2.0, 3.0]], requires_grad=True)
testw = torch.tensor([[0.1, 0.2, 0.3]], requires_grad=True)
testb = torch.tensor([0.5], requires_grad=True)
testy = F.linear(testx, testw, bias=testb)
print([a.requires_grad for a in [testx, testw, testb, testy]])

When I run it standalone in a new Python file, it correctly prints [True, True, True, True]. But when I run exactly the same lines inside a larger program, I get [True, True, True, False].

What could cause requires_grad to be set to False for testy in this case, and how can I debug it? Is there some context that I need to set for this to work correctly?

Thanks in advance for your help.

You might have disabled gradient computation globally, e.g. via:

import torch
import torch.nn.functional as F

torch.set_grad_enabled(False)
testx = torch.tensor([[1.0, 2.0, 3.0]], requires_grad=True)
testw = torch.tensor([[0.1, 0.2, 0.3]], requires_grad=True)
testb = torch.tensor([0.5], requires_grad=True)
testy = F.linear(testx, testw, bias=testb)
print([a.requires_grad for a in [testx, testw, testb, testy]])
# [True, True, True, False]
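
To track this down, check torch.is_grad_enabled() right before the F.linear call: if it returns False, some earlier code in the larger program (for example a torch.no_grad() block that is still active, or a global torch.set_grad_enabled(False)) has switched autograd off. A minimal sketch, assuming you want gradients just for these lines without changing the rest of the program, is to wrap them in torch.enable_grad():

import torch
import torch.nn.functional as F

# Check whether autograd is active at this point in the program.
print(torch.is_grad_enabled())  # False if grad was disabled earlier

# Locally re-enable gradient computation for just these lines.
with torch.enable_grad():
    testx = torch.tensor([[1.0, 2.0, 3.0]], requires_grad=True)
    testw = torch.tensor([[0.1, 0.2, 0.3]], requires_grad=True)
    testb = torch.tensor([0.5], requires_grad=True)
    testy = F.linear(testx, testw, bias=testb)

print(testy.requires_grad)  # True, even if grad is disabled globally

Alternatively, call torch.set_grad_enabled(True) (or remove the earlier disable) if gradients should be on for the whole program.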