Detaching gradient

Do index_select and clone detach gradients? That is, during the forward pass, if you use index_select (or clone) on a variable, does it create a new variable whose gradient doesn't backpropagate to the original variable?

Neither of them detaches gradients (well, index_select won't backpropagate into the indices, but it wouldn't be possible to do so anyway, since the indices are integers).
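For contrast, here is a minimal sketch (my addition, not from the thread) of the operation that does cut the graph: Variable.detach() returns a new Variable through which no gradient flows.

import torch
from torch.autograd import Variable

x = Variable(torch.ones(3), requires_grad=True)
z = (x * 1 + x.detach()).sum()  # the second term is cut off from the graph
z.backward()
x.grad  # all ones, not twos: the detached path contributed no gradient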

index_select does propagate gradients back:

x = Variable(torch.ones(3), requires_grad=True)
ixs = Variable(torch.LongTensor([1]))
y = x.index_select(0, ixs)  # select element 1 along dim 0
z = y.mean()
z.backward()
x.grad  # gradient flows back into x

> Variable containing:
>  0
>  1
>  0
> [torch.FloatTensor of size 3]
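As a further check (again my own addition), the backward pass of index_select scatter-adds into the source, so repeated indices accumulate gradient:

x = Variable(torch.ones(3), requires_grad=True)
ixs = Variable(torch.LongTensor([1, 1, 2]))
y = x.index_select(0, ixs)
y.sum().backward()
x.grad  # 0, 2, 1: index 1 was selected twice, so its gradients add up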

It seems to be the same for clone:

x_original = Variable(torch.ones(3), requires_grad=True)
x = x_original.clone()  # clone keeps x in the graph, connected to x_original
ixs = Variable(torch.LongTensor([1]))
y = x.index_select(0, ixs)
z = y.mean()
z.backward()
x_original.grad  # gradient flows through the clone back to x_original

> Variable containing:
>  0
>  1
>  0
> [torch.FloatTensor of size 3]

while x.grad is None gives True in this case, because x is a non-leaf variable and .grad is only populated for leaf variables.
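If you do need the gradient at the intermediate clone, one option is to attach a hook to it; a sketch (my addition) assuming register_hook on Variables:

x_original = Variable(torch.ones(3), requires_grad=True)
x = x_original.clone()
captured = []
x.register_hook(captured.append)  # capture the gradient flowing through the non-leaf x
ixs = Variable(torch.LongTensor([1]))
z = x.index_select(0, ixs).mean()
z.backward()
captured[0]  # same 0, 1, 0 gradient, even though x.grad itself stays None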
