About a tensor defined inside a function

Will the autograd system remember the tensor w in the function test?

def test(x):
    w = torch.rand(x.size(), dtype=x.dtype)
    y = w * x  # w is used to produce the output (the original snippet returned an undefined y)
    return y

If not, will w be garbage-collected after the function returns?


Autograd will make sure to keep around everything it needs for the backward pass. Tensors saved by the graph (like w here, since the backward of the multiplication needs it) stay alive as long as the graph does, even after the local variable goes out of scope. You don’t need to worry about it.