# Tensor index to index copy

Suppose I have two 3-D tensors `A` and `B`, and I want to copy some elements from `B` into `A`. Specifically, I have two lists of the form `[(x_1, y_1), (x_2, y_2), ...]` and `[(x'_1, y'_1), (x'_2, y'_2), ...]`, and I want to perform `A[x_1, y_1, :] = B[x'_1, y'_1, :]` and so on. Is there any fast way of doing this, or is a for-loop the only way?
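In other words, the for-loop version I'd like to avoid looks like this (with made-up shapes and index pairs):

```python
import torch

A = torch.zeros(3, 4, 6)
B = torch.rand(3, 4, 6)

# Made-up example index pairs [(x_i, y_i)] and [(x'_i, y'_i)]
pairs_A = [(0, 0), (0, 1), (1, 1)]
pairs_B = [(1, 1), (2, 1), (2, 2)]

# Copy each selected row of B into the corresponding position in A
for (x, y), (xp, yp) in zip(pairs_A, pairs_B):
    A[x, y, :] = B[xp, yp, :]
```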

Also, will such an operation support the flow of gradients from `A` to elements copied from `B`?

Thanks!

Hi,

Here is an example implementation.
Gradients will flow as expected from `A` back to the copied values of `B` (no gradient for the indices, of course).
Note that in-place operations sometimes prevent autograd from computing gradients, so there is an `inplace` flag in case you encounter this error.
Let me know if you have more questions.

```python
import torch

A = torch.zeros(3, 4, 6)
B = torch.rand(3, 4, 6)

indA = torch.LongTensor([[0, 0], [0, 1], [1, 1]])
indB = torch.LongTensor([[1, 1], [2, 1], [2, 2]])

def indices_copy(A, B, indA, indB, inplace=True):
    # To make sure our views below are valid
    assert A.is_contiguous()
    assert B.is_contiguous()

    # Get the size
    size = A.size()

    # Collapse the first two dimensions, so that we index only one
    vA = A.view(size[0] * size[1], size[2])
    vB = B.view(size[0] * size[1], size[2])

    # If we need out of place, clone to get a tensor backed by new memory
    if not inplace:
        vA = vA.clone()

    # Transform the 2D indices into 1D indices in the collapsed
    # dimension: (x, y) -> x * size[1] + y
    lin_indA = indA.select(1, 0) * size[1] + indA.select(1, 1)
    lin_indB = indB.select(1, 0) * size[1] + indB.select(1, 1)

    # Read from B and write into A
    vA.index_copy_(0, lin_indA, vB.index_select(0, lin_indB))

    return vA.view(size)

print("Inputs")
print(A)
print(B)
print(indA)
print(indB)

indices_copy(A, B, indA, indB)

print("Output inplace")
print(A)

A = torch.zeros(3, 4, 6)
new_A = indices_copy(A, B, indA, indB, inplace=False)

print("Output out of place")
print(new_A)
print("Unmodified A")
print(A)
```
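To illustrate the gradient point, here is a minimal self-contained check (using the same made-up shapes and indices, with the linearization inlined) showing that gradients reach exactly the copied rows of `B` through the out-of-place version:

```python
import torch

A = torch.zeros(3, 4, 6)
B = torch.rand(3, 4, 6, requires_grad=True)

indA = torch.LongTensor([[0, 0], [0, 1], [1, 1]])
indB = torch.LongTensor([[1, 1], [2, 1], [2, 2]])

# Same linearization trick, done out of place so A stays untouched
vA = A.view(12, 6).clone()
lin_indA = indA[:, 0] * 4 + indA[:, 1]
lin_indB = indB[:, 0] * 4 + indB[:, 1]
vA.index_copy_(0, lin_indA, B.view(12, 6).index_select(0, lin_indB))
new_A = vA.view(3, 4, 6)

# Backprop a simple scalar loss through the result
new_A.sum().backward()

# Only the selected (x', y') rows of B receive gradient (ones);
# every other row of B.grad is zero
print(B.grad[1, 1])
print(B.grad[0, 0])
```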

Awesome, this is exactly what I needed.
Thanks a lot!