Does autograd still work when you change the data of a Variable with `a[:,:,1,1]=2`?

My impression of autograd in PyTorch is that as long as you only apply functions that support autograd, the backward pass is handled automatically with no extra effort.

import torch
from torch.autograd import Variable

a = Variable(torch.randn(4, 5, 5, 5), requires_grad=True)

# Is this ok?
a[:, :, 1, 1] = 2

a[:,:,1,1] = 2 is equivalent to a.narrow(2,1,1).narrow(3,1,1).fill_(2).
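For reference, here is a quick check of that equivalence on plain tensors (with hypothetical values); narrow returns a view into the same storage, so fill_ writes directly into b:

import torch

a = torch.randn(4, 5, 5, 5)
b = a.clone()

a[:, :, 1, 1] = 2                            # indexing assignment
b.narrow(2, 1, 1).narrow(3, 1, 1).fill_(2)   # narrow gives a view, fill_ writes into b

print(torch.equal(a, b))  # True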

All functions that take a Variable as input support autograd.
If you do this, keep in mind that the gradient with respect to a[:,:,1,1] will always be 0. Since those values are never used, their gradient is 0.
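As a minimal sketch of that behaviour: to stay safe about in-place writes into a leaf that requires grad, this version overwrites a slice of a clone instead of a itself (a cautious adaptation of the snippet above), and the gradient at the overwritten positions indeed comes out as 0:

import torch
from torch.autograd import Variable

a = Variable(torch.randn(4, 5, 5, 5), requires_grad=True)

# Overwrite the slice on a clone so the write is not an in-place operation
# on a leaf Variable that requires grad.
b = a.clone()
b[:, :, 1, 1] = 2

b.sum().backward()

print(a.grad[:, :, 1, 1])  # all zeros: these values were overwritten, so they are never used
print(a.grad[:, :, 0, 0])  # all ones: these values flow straight into the sum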

Hi, let me make my questions clearer:

  1. As for the for-loop, does it support autograd if it looks like this:
a = Variable(torch.randn(N, C, 5, 5), requires_grad=True)
conv = torch.nn.Conv2d(C, C, 3, stride=1, padding=1)
conv.weight.data = torch.randn(C, C, 3, 3)  # weight shape is (out_channels, in_channels, kH, kW)
conv.bias.data.fill_(0)
for i in range(5):
    a = conv(a)  # the result overrides the original a
 
# if `a=conv(a)` is wrong, how can I do it?

When you do a = conv(a), you pass the Tensor currently referred to by the Python variable a to the convolution, get a new Tensor as output, and then make the Python variable a refer to that new Tensor.
This is not an in-place change of the original tensor; you only change what the Python variable a refers to.
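To illustrate, here is a small sketch (with hypothetical N and C): the loop rebinds a on every iteration, yet backward still reaches the original input and accumulates gradients for the weights used in all five calls:

import torch
import torch.nn as nn
from torch.autograd import Variable

N, C = 2, 3  # hypothetical batch size and channel count
x = Variable(torch.randn(N, C, 5, 5), requires_grad=True)
conv = nn.Conv2d(C, C, 3, stride=1, padding=1)

a = x
for i in range(5):
    a = conv(a)  # a is rebound to each new output; the graph keeps the previous tensors

a.sum().backward()
print(x.grad.shape)            # gradient reaches the original input
print(conv.weight.grad.shape)  # weight gradient accumulated over all 5 applications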