Since a.data = b.detach() is not recommended (is that right?), and a.fill_(v) can only cope with a 0-dim value, what is the recommended way to update a tensor's values? My typical case is writing gradients into tensor.grad after calling autograd.grad().
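For concreteness, a minimal sketch of that scenario; the tensor w and the loss are made up for the example:

import torch

# Hypothetical setup: a leaf tensor and a loss computed from it.
w = torch.randn(3, requires_grad=True)
loss = (w ** 2).sum()

# autograd.grad returns the gradients as a tuple but does not
# populate w.grad, so they have to be written into w.grad by hand.
(g,) = torch.autograd.grad(loss, w)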
Hi,
If you want to change the value of a tensor that requires gradients without it being tracked by the autograd engine, you should do:
with torch.no_grad():
    a.copy_(b)
This is what is done for weight initialization in the nn.init package, for example.
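As a concrete illustration of that pattern, here is a minimal, self-contained sketch (the tensors a and b are made up for the example):

import torch

a = torch.randn(3, requires_grad=True)  # tensor tracked by autograd
b = torch.zeros(3)                      # new values to write into a

# copy_ inside no_grad overwrites a's values in place without the
# copy being recorded by the autograd engine.
with torch.no_grad():
    a.copy_(b)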
If the tensor doesn't require gradients and has no grad_fn associated with it, you can simply do:
a.copy_(b)
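Applied to the original question: the .grad buffer of a leaf tensor requires no gradients and has no grad_fn, so the gradients returned by autograd.grad() can be written into it directly. A self-contained sketch with hypothetical names:

import torch

w = torch.randn(3, requires_grad=True)
loss = (w ** 2).sum()

(g,) = torch.autograd.grad(loss, w)
if w.grad is None:
    # No existing buffer: store a detached copy as the gradient.
    w.grad = g.detach().clone()
else:
    # Existing buffer: overwrite its values in place.
    w.grad.copy_(g)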