I have a tensor t that I would like to transform. I wrote a pair of nested for loops that produces the desired result:
t = torch.tensor([[ 1, 2, 3], [ 4, 5, 6], [ 7, 8, 9], [10, 11, 12]])
N, d = t.shape
new = torch.zeros((N, N, 2*d))
for i in range(N):
    for j in range(N):
        new[i, j, :d] = t[i]
        new[i, j, d:] = t[j]
new
>>>
tensor([[[ 1.,  2.,  3.,  1.,  2.,  3.],
         [ 1.,  2.,  3.,  4.,  5.,  6.],
         [ 1.,  2.,  3.,  7.,  8.,  9.],
         [ 1.,  2.,  3., 10., 11., 12.]],

        [[ 4.,  5.,  6.,  1.,  2.,  3.],
         [ 4.,  5.,  6.,  4.,  5.,  6.],
         [ 4.,  5.,  6.,  7.,  8.,  9.],
         [ 4.,  5.,  6., 10., 11., 12.]],

        [[ 7.,  8.,  9.,  1.,  2.,  3.],
         [ 7.,  8.,  9.,  4.,  5.,  6.],
         [ 7.,  8.,  9.,  7.,  8.,  9.],
         [ 7.,  8.,  9., 10., 11., 12.]],

        [[10., 11., 12.,  1.,  2.,  3.],
         [10., 11., 12.,  4.,  5.,  6.],
         [10., 11., 12.,  7.,  8.,  9.],
         [10., 11., 12., 10., 11., 12.]]])
Do the for loops cause any problems with gradient computation? And is there a way to concatenate the tensors in the same pattern without any for loops?
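For reference, here is a broadcast-based sketch I put together that seems to reproduce the loop's result using unsqueeze, expand, and cat (I have not checked how it interacts with autograd, which is part of what I'm asking):

```python
import torch

t = torch.tensor([[ 1,  2,  3],
                  [ 4,  5,  6],
                  [ 7,  8,  9],
                  [10, 11, 12]], dtype=torch.float)
N, d = t.shape

# Left half: row i repeated along dim 1, i.e. new[i, j, :d] = t[i]
left = t.unsqueeze(1).expand(N, N, d)
# Right half: row j repeated along dim 0, i.e. new[i, j, d:] = t[j]
right = t.unsqueeze(0).expand(N, N, d)
# Concatenate the two halves along the last dimension -> shape (N, N, 2*d)
new = torch.cat([left, right], dim=-1)
```

expand only creates views, so the copies are materialized once by cat rather than row by row in Python.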