Understanding set_requires_grad

I have this tensor:

auto x_tensor = torch::empty({size, 784});

And I am calling set_requires_grad() on it before training:

x_tensor.set_requires_grad(true);

The function set_requires_grad() returns a Tensor.
Is it the same as x_tensor? Should I capture it? Should I use it? Does set_requires_grad() change x_tensor in place?

You would need to assign and use the returned tensor. Alternatively, you can use the in-place operation .requires_grad_() (note the trailing underscore).
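For concreteness, a minimal sketch of both variants (the concrete size value and the printout are placeholders, not from your code):

#include <torch/torch.h>
#include <iostream>

int main() {
  const int64_t size = 4;  // placeholder batch size

  // Variant 1: set_requires_grad() -- assign the returned tensor back and use that.
  auto x_tensor = torch::empty({size, 784});
  x_tensor = x_tensor.set_requires_grad(true);

  // Variant 2: requires_grad_() -- the in-place op, no reassignment needed.
  auto y_tensor = torch::empty({size, 784});
  y_tensor.requires_grad_(true);

  std::cout << x_tensor.requires_grad() << "\n";  // prints 1
  std::cout << y_tensor.requires_grad() << "\n";  // prints 1
}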

I’ll probably use the in-place requires_grad_() then, thanks.
But just to be clear: with set_requires_grad(), do I replace x_tensor like this?

x_tensor = x_tensor.set_requires_grad(true);

Yes, your approach of reassigning the tensor works too.
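As a quick sanity check that gradients flow after the reassignment, something like this sketch should work (the {4, 784} shape and the * 2.0 op are arbitrary placeholders):

#include <torch/torch.h>
#include <iostream>

int main() {
  auto x_tensor = torch::empty({4, 784});
  x_tensor = x_tensor.set_requires_grad(true);

  // Any differentiable op works; sum() just produces a scalar to call backward() on.
  auto loss = (x_tensor * 2.0).sum();
  loss.backward();

  std::cout << x_tensor.grad().sizes() << "\n";  // [4, 784], so the grad was populated
}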
