Hello,

I am breaking a tensor into chunks so that I can individually set the `requires_grad` attribute of each piece, depending on which values I want to optimize:

```
import torch

ones = torch.ones(3)
chunks = torch.chunk(ones, 3, 0)
# using chunks as input to the optimizer
k = torch.cat(chunks)
print(k)       # tensor([1., 1., 1.])
k[0] = 2
print(k)       # tensor([2., 1., 1.])
print(chunks)  # (tensor([1.]), tensor([1.]), tensor([1.]))
```

I am doing this so that I don’t have to change most of my code: I first use the `chunk` function to break the tensor apart and then the `cat` function to reassemble it. However, changing values in `k` doesn’t seem to be reflected in `chunks`. I don’t understand where the flaw in my intuition is.
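To illustrate what I am seeing, here is a minimal check I tried (my own sketch, using `Tensor.data_ptr()` to compare underlying storage): it shows that `k` does not share memory with the original tensor, while the chunks do.

```
import torch

ones = torch.ones(3)
chunks = torch.chunk(ones, 3, 0)
k = torch.cat(chunks)

# The chunks share storage with the original tensor...
print(chunks[0].data_ptr() == ones.data_ptr())  # True
# ...but k lives in its own, freshly allocated storage,
# so writes to k never reach chunks or ones.
print(k.data_ptr() == ones.data_ptr())  # False

k[0] = 2
print(ones)  # unchanged: tensor([1., 1., 1.])
```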

Thank you.