Hello guys. I have the following parameters in a class I have written:

```
self.m = 4
self.Pnum = 2*self.m - 1
self.Nl = 1000
```

In one of the object’s methods I’m doing some linear algebra on the GPU. I need to build a 4D tensor out of my vector z, which has length Pnum:

```
z = torch.linspace(0, 2*np.pi, steps=self.Pnum)
```

I turn it into a 4D tensor of the shape I need with:

```
z_tile = z.repeat(self.Nl * self.m, self.m).reshape((self.Nl, self.m, self.m, self.Pnum))
```

This operation takes a lot of memory for larger values of self.Nl (which is an input), since repeat() copies the data Nl * m * m times. Is there a way to rewrite this z_tile tensor using views and expand so that it uses less GPU memory?
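For concreteness, here is a minimal standalone sketch (outside the class, on CPU, with hard-coded parameters) of the expand-based version I have in mind; I'm not sure it is equivalent to my repeat/reshape version in all downstream uses:

```python
import math

import torch

Nl, m = 1000, 4
Pnum = 2 * m - 1  # 7

z = torch.linspace(0, 2 * math.pi, steps=Pnum)

# Current approach: repeat() materializes Nl * m * m copies of z.
z_tile = z.repeat(Nl * m, m).reshape(Nl, m, m, Pnum)

# Candidate: view + expand uses stride 0 for the new dimensions,
# so no data is copied and no extra memory is allocated.
z_view = z.view(1, 1, 1, Pnum).expand(Nl, m, m, Pnum)

assert torch.equal(z_tile, z_view)        # same values
assert z_view.data_ptr() == z.data_ptr()  # shares z's storage
```

I assume any later op that forces a contiguous copy (e.g. `.contiguous()` or `.reshape()`) would materialize the full tensor again and lose the saving.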