Write to .data instead? [RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.]

Hi,

I am optimizing a tensor called z in my training loop; z is a leaf tensor. I want to run some experiments on how my model behaves when, each epoch, a little noise is added to z after the gradients have been backpropagated.

When I do this

my_adam = optim.Adam([z], lr=self.lr_shape)

# and this comes in the training loop:
loss.backward()
my_adam.step()
z[some_indices] = z[some_indices] + my_noise_tensor

I get

RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.

However, when I do this instead

z.data[some_indices] = z[some_indices] + my_noise_tensor

it works without an error. Is this a safe way to do it, or can it cause problems with autograd (or other problems I might be missing)? In other words, is it safe to write to .data directly?

What is the recommended way to modify a leaf variable manually in the training loop?


Yeah, that’s fine here. The more recent style for expressing this is to wrap the update in torch.no_grad(), to signify that you don’t want to track gradients for this operation (but z.data is OK too):

with torch.no_grad():
  z[some_indices] += my_noise_tensor
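
For context, here is a minimal self-contained sketch of how that could look in the training loop. The names z, some_indices, and my_noise_tensor are from your post, but the shapes, loss, and learning rate are placeholders for your own setup:

import torch
import torch.optim as optim

# Assumed setup for illustration: z is the leaf tensor being optimized.
z = torch.randn(10, requires_grad=True)
some_indices = torch.tensor([0, 2, 4])
my_adam = optim.Adam([z], lr=1e-2)

for epoch in range(100):
    my_adam.zero_grad()
    loss = (z ** 2).sum()          # stand-in for your real loss
    loss.backward()
    my_adam.step()

    # Perturb z after the optimizer step; no_grad keeps autograd from
    # complaining about the in-place update on a leaf that requires grad.
    with torch.no_grad():
        my_noise_tensor = 0.01 * torch.randn(len(some_indices))
        z[some_indices] += my_noise_tensor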