Will the autograd system remember the tensor w created inside the function test?
import torch

def test(x):
    # size(x) is not valid; use x.size() (or torch.rand_like(x))
    w = torch.rand(x.size(), dtype=x.dtype)
    y = w * x
    return y
If not, will w be garbage-collected after the function returns?
Hi,
Autograd will make sure to keep around everything it needs for the backward pass, so you don't need to worry about it.