In my project, I need to input multiple translations of the same Tensor to a convolution layer.
A simplified version of what I do is:
import torch

im = torch.randint(0, 9, (3, 100, 200))
im_tr = torch.zeros_like(im)
im_tr[:, 50:, 50:] = im[:, :-50, :-50]  # translate by (50, 50)
im_cat = torch.cat((im, im_tr), dim=0)
However, this operation makes a copy of the potentially big tensor im. Moreover, I don't want to input just one translation but potentially tens of them, and the translation offsets can be large.
Is there a clean way to do this in PyTorch?
Do you have any suggestions for avoiding a memory explosion while reusing the same underlying data?
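One idea I have considered (a sketch, not necessarily the best approach): pad the image once into a buffer large enough for the biggest offset, then take each translation as a basic slice of that buffer. Basic slicing in PyTorch returns views that share storage with the padded tensor, so the tens of translations cost no extra image copies. The `offsets` list below is a hypothetical example.

```python
import torch

im = torch.randint(0, 9, (3, 100, 200))

# Hypothetical (dy, dx) translation offsets, including the identity.
offsets = [(0, 0), (50, 50), (20, 80)]
max_h = max(h for h, _ in offsets)
max_w = max(w for _, w in offsets)

# Pad once: im is copied a single time into the bottom-right corner,
# with zeros above and to the left of it.
padded = torch.zeros(3, 100 + max_h, 200 + max_w, dtype=im.dtype)
padded[:, max_h:, max_w:] = im

# Each translated image is a view into `padded` (no data copied):
# shifting the slice start up-left by (h, w) shifts the content
# down-right by (h, w), exactly like im_tr above.
views = [
    padded[:, max_h - h : max_h - h + 100, max_w - w : max_w - w + 200]
    for h, w in offsets
]
```

Note the caveat: the moment you `torch.cat` or `torch.stack` the views to build one batch, PyTorch materializes a contiguous copy anyway. To actually stay within memory you would feed the views to the convolution one at a time (or in small batches), trading some speed for memory.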