How to perform indexing with autograd?

I need to keep all the values that are greater than 0 as they are, and set all the values less than or equal to zero to exp(a*x), where a is a constant. I tried the following:

   x[x<=0] = torch.exp(a*x[x<=0])

Is it correct?
I am getting a runtime error: a leaf variable that requires grad has been used in an in-place operation.
How can I overcome this error?

The instruction itself is right, and it does what you want.
However, in terms of autograd, PyTorch does not allow in-place operations on leaf variables that require grad.

Here is some code, in PyTorch 1.0.0 and Python 3.7.0, that reproduces your error:

import torch
from torch.distributions import normal

# Reproducibility
torch.manual_seed(0)

# Create data
dist = normal.Normal(0., 1.)
y = dist.sample((5, 2))

# This is your leaf variable.
x = y.data
# Make it require grad.
x.requires_grad = True

print("X BEFORE:\n {}".format(x))
# Uncomment the next line to make x a non-leaf variable:
# x = x + 0.
# With it commented out, the in-place assignment below raises:
# RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.

a = 10.
x[x <= 0] = torch.exp(a*x[x <= 0])

print("X AFTER:\n {}".format(x))

Running the above code will generate the error:

Traceback (most recent call last):
  File "xxxxx.py", line 21, in <module>
    x[x <= 0] = torch.exp(a*x[x <= 0])
RuntimeError: a leaf Variable that requires grad has been used in an in-place operation.

When x is made a non-leaf variable (by uncommenting x = x + 0.), autograd works:

X BEFORE: 
tensor([[ 1.5410, -0.2934],
        [-2.1788,  0.5684],
        [-1.0845, -1.3986],
        [ 0.4033,  0.8380],
        [-0.7193, -0.4033]], requires_grad=True)
X AFTER:
 tensor([[1.5410e+00, 5.3169e-02],
        [3.4486e-10, 5.6843e-01],
        [1.9498e-05, 8.4329e-07],
        [4.0335e-01, 8.3803e-01],
        [7.5215e-04, 1.7713e-02]], grad_fn=<IndexPutBackward>)

The trick here is to apply a non-in-place operation to the leaf variable, which turns it into a non-leaf variable. In this example, we added 0. to x. There are probably better and more canonical ways to do this in PyTorch.
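For what it's worth, one possible alternative (a minimal sketch on my side, not something from the code above) is to avoid the in-place masked assignment entirely and build the result out of place with torch.where, which autograd handles directly:

import torch

# Reproducibility
torch.manual_seed(0)

a = 10.
x = torch.randn(5, 2, requires_grad=True)  # leaf variable

# Keep x where x > 0, otherwise use exp(a * x); no in-place write on a leaf.
out = torch.where(x > 0, x, torch.exp(a * x))

# Gradients flow back to the leaf x.
out.sum().backward()
print(x.grad)

This way x stays a leaf variable and you still get gradients with respect to it.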

Setting x.requires_grad = False also works fine, without the need to turn the leaf variable into a non-leaf one:

import torch
from torch.distributions import normal

# Reproducibility
torch.manual_seed(0)

# Create data
dist = normal.Normal(0., 1.)
y = dist.sample((5, 2))

# This is your leaf variable.
x = y.data
# Do not require grad this time.
x.requires_grad = False

print("X BEFORE:\n {}".format(x))
# There is no need to make x a non-leaf variable here (x = x + 0.),
# since x does not require grad.

a = 10.
x[x <= 0] = torch.exp(a*x[x <= 0])

print("X AFTER:\n {}".format(x))

Running this code gives:

X BEFORE:
 tensor([[ 1.5410, -0.2934],
        [-2.1788,  0.5684],
        [-1.0845, -1.3986],
        [ 0.4033,  0.8380],
        [-0.7193, -0.4033]])
X AFTER:
 tensor([[1.5410e+00, 5.3169e-02],
        [3.4486e-10, 5.6843e-01],
        [1.9498e-05, 8.4329e-07],
        [4.0335e-01, 8.3803e-01],
        [7.5215e-04, 1.7713e-02]])
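
Note that with requires_grad = False no gradients are tracked through x at all, so this second variant only helps if you do not need gradients with respect to x. If you do need them, a rough sketch (my own suggestion, not part of the original code) is to clone the leaf first and apply the in-place assignment to the clone:

import torch

# Reproducibility
torch.manual_seed(0)

a = 10.
x = torch.randn(5, 2, requires_grad=True)  # leaf variable

# clone() returns a non-leaf copy that still tracks gradients,
# so the masked in-place assignment is allowed on it.
z = x.clone()
z[z <= 0] = torch.exp(a * z[z <= 0])

z.sum().backward()
print(x.grad)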

I hope this helps.
