Reshaping a 1D vector into a 4D tensor without allocating huge amounts of GPU memory

Hello guys. I have the following parameters in a class I have written:

self.m = 4
self.Pnum = 2 * self.m - 1
self.Nl = 1000

In one of the object’s methods I’m doing some linear algebra on the GPU. I need to make a 4D tensor out of my vector z, which is of length Pnum:

z = torch.linspace(0, 2 * np.pi, steps=self.Pnum)

I then transform it into a 4D tensor with a specific shape:

z_tile = z.repeat(self.Nl * self.m, self.m).reshape((self.Nl, self.m, self.m, self.Pnum))

This operation takes a lot of memory for larger values of self.Nl (which is an input). Is there a way to rewrite z_tile using views and expand so that it uses less GPU memory?

Hi,
repeat makes a new copy of the tensor’s data each time, so the memory cost is expected (and, depending on the behaviour you expect for your gradients, sometimes necessary).

You can alternatively use expand, which returns a view instead of allocating new memory:
https://pytorch.org/docs/stable/tensors.html?highlight=expand_as#torch.Tensor.expand
Please read the docs to be aware of the differences.
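
For your shapes, a minimal sketch of the expand-based rewrite (using plain variables m, Pnum, Nl in place of the class attributes):

import math
import torch

m = 4
Pnum = 2 * m - 1  # 7
Nl = 1000

z = torch.linspace(0, 2 * math.pi, steps=Pnum)

# repeat materialises Nl * m * m * Pnum elements in memory
z_repeat = z.repeat(Nl * m, m).reshape(Nl, m, m, Pnum)

# expand adds singleton dims and broadcasts them as a view;
# no data is copied, and the expanded dims get stride 0
z_view = z.view(1, 1, 1, Pnum).expand(Nl, m, m, Pnum)

print(torch.equal(z_repeat, z_view))  # True: same shape and values
print(z_view.stride())                # (0, 0, 0, 1)

The view only ever holds the original Pnum elements, no matter how large Nl gets. One caveat though.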

From docs:

More than one element of an expanded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first.
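
To make that caveat concrete, a small sketch of the clone-first pattern (variable names are just for illustration):

z_tile = z.view(1, 1, 1, Pnum).expand(Nl, m, m, Pnum)

# Writing into the view would hit aliased memory locations,
# so take a contiguous, independently-owned copy before mutating
z_writable = z_tile.clone()
z_writable[0, 0, 0, 0] = 0.0  # safe: only this one element changes

Note that cloning reallocates the full Nl x m x m x Pnum buffer, so the memory saving only applies as long as z_tile stays read-only.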