Translate a Tensor without copy


In my project, I need to input multiple translations of the same Tensor to a convolution layer.

A simplified version of what I do is:

import torch

im = torch.randint(0, 9, (3, 100, 200))
im_tr = torch.zeros_like(im)
im_tr[:, 50:, 50:] = im[:, :-50, :-50]
im_cat = torch.cat((im, im_tr), dim=0)

However, this operation makes a copy of the potentially large tensor im. Moreover, I don’t want to input just one translation but potentially tens of them, and the translation offsets can be large.

Is there a clean way to do this in PyTorch?
Do you have any suggestions for avoiding a memory blow-up while always reusing the same underlying data?
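One possible workaround, sketched here under the assumption that the offsets have a known upper bound `K` (the names `buf` and `translated` are mine, not from the original post): zero-pad the image into a single larger buffer once, then take each translation as a slice of that buffer. Slicing in PyTorch returns a view that shares storage with the buffer, so no per-translation copy is made.

```python
import torch

K = 60                                   # assumed upper bound on the offsets
im = torch.randint(0, 9, (3, 100, 200))

# The only full copy happens once, when filling the padded buffer.
buf = torch.zeros(3, 100 + K, 200 + K, dtype=im.dtype)
buf[:, K:, K:] = im

def translated(dy, dx):
    """Shift im down by dy and right by dx (0 <= dy, dx <= K).

    Returns a (3, 100, 200) view of buf -- slicing copies no data.
    """
    return buf[:, K - dy : K - dy + 100, K - dx : K - dx + 200]

# Same result as the zeros_like/assignment snippet above, without a new copy.
im_tr_view = translated(50, 50)
```

Note that calling torch.cat or torch.stack on such views would still materialize a copy, so to keep memory flat you would feed the views to the convolution layer one at a time (or in small batches).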

For limited translation offsets, you can regard each translation as a convolution of the original tensor with a shifted one-hot kernel (and a weighted sum of many small translations approximates a convolution with a Gaussian kernel). The specific formulas you can work out yourself.
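To make the reply's idea concrete: a pure translation is a convolution with a kernel that is all zeros except for a single 1. A minimal sketch, assuming offsets bounded by `K` (the function name `translate_conv` is mine):

```python
import torch
import torch.nn.functional as F

def translate_conv(x, dy, dx, K):
    """Shift a (C, H, W) float tensor down by dy and right by dx via conv2d.

    F.conv2d computes a cross-correlation, so the single 1 in the kernel
    goes at position (K - dy, K - dx), not (K + dy, K + dx).
    """
    C = x.shape[0]
    k = torch.zeros(C, 1, 2 * K + 1, 2 * K + 1)
    k[:, 0, K - dy, K - dx] = 1.0
    # groups=C applies each one-hot kernel to its own channel.
    return F.conv2d(x.unsqueeze(0), k, padding=K, groups=C).squeeze(0)

im = torch.randint(0, 9, (3, 100, 200)).float()
im_tr = translate_conv(im, 50, 50, K=50)   # matches the snippet in the question
```

In principle, stacking several such one-hot kernels into one weight tensor yields all the translations in a single conv2d call, though a (2K+1)-wide kernel gets expensive for large offsets, and the output batch is still materialized in memory.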