Why are gradients not disabled when decorating the function?

I have a function in which printing the requires_grad attribute of a tensor that has requires_grad = True set correctly yields False, because the function itself is decorated with @torch.no_grad(). Turning off the decoration results in True.

I tried to reproduce this behaviour with the following snippet:

import torch
import torch.nn as nn

x = nn.Parameter(torch.tensor(1.))

@torch.no_grad()
def foo(t):
    print(t.requires_grad)

print(x.requires_grad)
foo(x)

This gives True both times, whereas I expected the second print statement to output False.

Am I missing something here?


The decorator won’t change the requires_grad property on existing Tensors. The only thing it does is disable tracking of gradient computations within that block.
In particular, that means that if you do any op in your function, e.g. t2 = t + 1, then t2.requires_grad == False, because that op was not tracked and so its output does not require gradients.
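A minimal sketch of that distinction (variable names are just for illustration):

```python
import torch

x = torch.ones(1, requires_grad=True)

@torch.no_grad()
def bar(t):
    # the attribute on the existing (input) tensor is untouched,
    # but the op below is not tracked, so its output does not require grad
    t2 = t + 1
    return t.requires_grad, t2.requires_grad

print(bar(x))  # (True, False)
```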

Thank you, yes, that is clear. However, I experienced a case where requires_grad would actually change from printing True to False when output within a function decorated like above, which seems strange to me. As soon as I put torch.enable_grad() as a decorator instead, requires_grad printed True as it was meant to. That's why I figured that the decorator itself causes this output to be different.
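For reference, the behaviour described above can be reproduced when the tensor being inspected is itself the result of an op inside the function; a hedged sketch (the function names are made up), assuming that is what happened here:

```python
import torch

x = torch.ones(1, requires_grad=True)

@torch.no_grad()
def f(t):
    # the multiply is not tracked under no_grad, so its output
    # reports requires_grad == False
    return (t * 2).requires_grad

@torch.enable_grad()
def g(t):
    # with tracking explicitly enabled, the op's output
    # requires grad again
    return (t * 2).requires_grad

print(f(x), g(x))  # False True
```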