Edit huge tensor piece-wise by using references

Question: How can I assign values to the tensor p without losing the shared storage with t, so that every change to p is visible in t? Note that I need to avoid element-wise iteration.

Context:
I have a huge tensor t of shape n x m, and another huge tensor v of shape l x k, containing information that has to be parsed somehow into t.
In a loop, I repeatedly take a piece of t out; I call it p. It looks similar to the following code (assuming that the array pos of slice boundaries is given):

for i in range(0, len(pos) - 1):
   p = t.narrow(0, pos[i], pos[i+1] - pos[i])  # narrow(dim, start, length) returns a view into t
   # now assign values to p that come from v by some parsing logic.

Since t and p share the same underlying storage, values assigned to p should also show up in t. (This in fact works if I manually do p[0][0] = 1, for example.)
For efficiency reasons I cannot loop over every element of t by index.
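A toy example of the view behaviour I am relying on (illustrative only):

import torch

t = torch.zeros(4, 3)
p = t.narrow(0, 1, 2)   # view of rows 1 and 2 of t, same storage
p[0][0] = 1             # in-place write through the view
print(t[1][0])          # tensor(1.) -- the change is visible in t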

Since my parsing logic is encoded in an index array, I thought that torch.gather would fit my needs perfectly.
Unfortunately the returned tensor is a new copy of the data, so p ends up with the correct values but t does not (the reference to the shared storage is lost).

Something like:
p = v[index].clone().detach()
did not work either.
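Roughly what I tried (a sketch; index stands for my parsing index tensor):

p = t.narrow(0, pos[i], pos[i+1] - pos[i])   # p is a view into t
p = torch.gather(v, 0, index)                # rebinds p to a new tensor; t is no longer updated
p = v[index].clone().detach()                # same problem: not a view into t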

Note: I cannot operate on t directly due to some program logic.

Hi, perhaps you’re looking for the p.copy_() method.
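Something along these lines (a sketch, assuming index selects the rows of v that belong to the current slice):

for i in range(0, len(pos) - 1):
    p = t.narrow(0, pos[i], pos[i+1] - pos[i])   # view into t
    p.copy_(v[index])                            # in-place copy, writes through to t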

Thank you, this works as requested.
Unfortunately, the backward pass no longer works:

RuntimeError: leaf variable has been moved into the graph interior

Do you have a suggestion on how to overcome this issue?

Ouch, supporting backprop through such a “buffer” is tough; I don’t think PyTorch has a built-in solution for that. Writing your “gatherings” inside a custom autograd.Function could work: in the backward function you would scatter the gradients back to the original positions in v (torch.gather implements its backward with scatter_add_, but your case is more complex, so you would have to work out how to map the indices).
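Very roughly, something like this (an untested sketch; the name GatherRows is mine, and it assumes index is a 1-D LongTensor of row indices into v — you would have to adapt the index mapping to your parsing logic):

import torch

class GatherRows(torch.autograd.Function):
    # forward: gather the rows of v selected by index (a copy, like torch.gather)
    # backward: scatter the incoming gradient back to those rows of v
    @staticmethod
    def forward(ctx, v, index):
        ctx.save_for_backward(index)
        ctx.v_shape = v.shape
        return v[index]

    @staticmethod
    def backward(ctx, grad_out):
        (index,) = ctx.saved_tensors
        grad_v = grad_out.new_zeros(ctx.v_shape)
        grad_v.index_add_(0, index, grad_out)   # accumulate gradients per source row
        return grad_v, None

# usage: gathered = GatherRows.apply(v, index)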