Grad lost after CopySlices of a tensor

For the following simple code, the result differs completely between pytorch==1.9.1 (python==3.9.13) and pytorch==1.11.0 (python==3.10.4): in the newer PyTorch version, the grad is lost.

import torch
S = torch.zeros(1,4)
a = torch.tensor(1.,requires_grad=True)
S[0,2:4] = a
print(S)

pytorch==1.9.1, python==3.9.13 gives:

tensor([[0., 0., 1., 1.]], grad_fn=<CopySlices>)

but pytorch==1.11.0, python==3.10.4 gives:

tensor([[0., 0., 1., 1.]])

As you can see, the grad_fn is lost.

My code base contains many implementations like this. I would really like to know why this happens, and how I can fix it in the newer PyTorch version without massively refactoring my code base.

Hi Ciacc!

This appears to be a known bug / regression that has recently been fixed.
It works for me in a version-1.13 nightly build:

>>> import torch
>>> torch.__version__
'1.13.0.dev20220604'
>>> S = torch.zeros(1,4)
>>> a = torch.tensor(1.,requires_grad=True)
>>> S[0,2:4] = a
>>> print(S)
tensor([[0., 0., 1., 1.]], grad_fn=<CopySlices>)

(I can reproduce your issue in 1.11.)
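As a quick sanity check (my own sketch, not part of the original repro), you can confirm that the gradient actually flows back through the CopySlices node in a build where this works:

```python
import torch

# Minimal check that gradients flow through the in-place slice assignment.
S = torch.zeros(1, 4)
a = torch.tensor(1., requires_grad=True)
S[0, 2:4] = a          # S now carries grad_fn=<CopySlices>
S.sum().backward()     # d(sum)/da = 2, since a fills two slots
print(a.grad)          # tensor(2.)
```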

While waiting for the fix to hit an updated stable release, I would recommend
downgrading back to version 1.9, or, if you’re comfortable working with an
“unstable” release, upgrading to the nightly. (Personally, I would go with the
nightly, but not for important production purposes.)
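If neither option suits you, a possible workaround (my own sketch, untested against your actual code base) is to build the tensor out-of-place instead of writing into a slice, e.g. with torch.cat, so no CopySlices node is involved at all:

```python
import torch

a = torch.tensor(1., requires_grad=True)
# Construct the row out-of-place: zeros for columns 0:2,
# then the scalar a broadcast to fill columns 2:4.
S = torch.cat([torch.zeros(1, 2), a.expand(1, 2)], dim=1)
print(S)               # tensor([[0., 0., 1., 1.]], ...) with a grad_fn attached
S.sum().backward()
print(a.grad)          # tensor(2.)
```

This avoids the in-place assignment entirely, at the cost of restructuring each such statement rather than leaving the code as-is.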

See this github issue:

Best.

K. Frank

Hi K. Frank,

Thanks so much for the reply. I thought it was an intended change. Now at least I don’t have to worry about refactoring my code :smiley:

Trying out the nightly sounds cool. I will give it a whirl.