Hello,
I am breaking a tensor into chunks so that I can individually set the requires_grad attribute based on which values I want to optimize.

import torch

ones = torch.ones(3)
chunks = torch.chunk(ones, 3, 0)
# using chunks as input to the optimizer
k = torch.cat(chunks)
print(k)
# Output: tensor([1., 1., 1.])
k[0] = 2
print(k)
# Output: tensor([2., 1., 1.])
print(chunks)
# Output: (tensor([1.]), tensor([1.]), tensor([1.]))

I am doing this so that I don’t have to change most of my code: I first break the tensor apart with torch.chunk and then reassemble it with torch.cat. However, changing values of the variable ‘k’ doesn’t seem to be reflected in the variable ‘chunks’. I don’t understand where the flaw in my intuition is.

torch.chunk creates views of the original ones tensor, so the chunks share its underlying storage. torch.cat does not create a view; it creates a brand new tensor by copying the data. This is why modifying the output of torch.cat (k) doesn’t change chunks: their data storages are completely unrelated.
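A short sketch illustrating the difference: writes through a chunk propagate to ones (shared storage), while writes to the concatenated tensor k touch neither, and data_ptr() confirms which tensors share storage.

```python
import torch

ones = torch.ones(3)
chunks = torch.chunk(ones, 3, 0)

# Each chunk is a view: writing through a chunk writes into `ones`.
chunks[0][0] = 5.0
print(ones)          # tensor([5., 1., 1.])

# torch.cat copies: modifying k leaves both chunks and ones untouched.
k = torch.cat(chunks)
k[1] = 9.0
print(chunks[1])     # tensor([1.])
print(ones)          # tensor([5., 1., 1.])

# Storage check: chunk 0 starts at the same address as ones; k does not.
print(chunks[0].data_ptr() == ones.data_ptr())   # True
print(k.data_ptr() == ones.data_ptr())           # False
```

Because of this, any requires_grad settings or optimizer references attached to the chunks are not carried by k; the gradients flow to the chunks themselves, not to the copied tensor.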