How to concatenate a tensor WITHIN axis=1?

I have a tensor of shape (2,2,2,2):

tensor([[[[   5.,    5.],
          [   5.,    5.]],

         [[  10.,   10.],
          [  10.,   10.]]],


        [[[ 100.,  100.],
          [ 100.,  100.]],

         [[1000., 1000.],
          [1000., 1000.]]]], device='cuda:0')

I want to transform it so that the entries along axis=1 are each repeated 3 times. After applying .view(-1) to that, I should get this 1D tensor:

tensor([   5.,    5.,    5.,    5.,    5.,    5.,    5.,    5.,    5.,    5.,
           5.,    5.,   10.,   10.,   10.,   10.,   10.,   10.,   10.,   10.,
          10.,   10.,   10.,   10.,  100.,  100.,  100.,  100.,  100.,  100.,
         100.,  100.,  100.,  100.,  100.,  100., 1000., 1000., 1000., 1000.,
        1000., 1000., 1000., 1000., 1000., 1000., 1000., 1000.], device='cuda:0')

How to do this?

This should work:

x = x.repeat(1, 1, 3, 1)  # tiles dim 2 three times -> shape (2, 2, 6, 2)
print(x.view(-1))
> tensor([   5.,    5.,    5.,    5.,    5.,    5.,    5.,    5.,    5.,    5.,
           5.,    5.,   10.,   10.,   10.,   10.,   10.,   10.,   10.,   10.,
          10.,   10.,   10.,   10.,  100.,  100.,  100.,  100.,  100.,  100.,
         100.,  100.,  100.,  100.,  100.,  100., 1000., 1000., 1000., 1000.,
        1000., 1000., 1000., 1000., 1000., 1000., 1000., 1000.])
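For anyone who wants to reproduce this without a GPU, here is a small self-contained version of the same solution on CPU (the tensor values are rebuilt from the question):

```python
import torch

# Rebuild the (2, 2, 2, 2) example tensor from the question on CPU.
x = torch.tensor([[[[5., 5.], [5., 5.]],
                   [[10., 10.], [10., 10.]]],
                  [[[100., 100.], [100., 100.]],
                   [[1000., 1000.], [1000., 1000.]]]])

x = x.repeat(1, 1, 3, 1)   # tiles dim 2 three times -> shape (2, 2, 6, 2)
print(x.shape)             # torch.Size([2, 2, 6, 2])
print(x.view(-1))          # 48 values: 12 fives, 12 tens, 12 hundreds, 12 thousands
```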

Thank you so much. Your solution works fine, but for a larger tensor such as (50, 512, 224, 224) my system crashes (all available RAM is used). Is there a memory-efficient workaround for that?

I don’t think so.
Even if you gave the tensor a singleton dimension so that you could use expand, you would still have to call contiguous() before the view() call, which would increase the memory usage again:

x = x.expand(-1, -1, 6, 2)
print(x.contiguous().view(-1))

I’m getting this error when using x = x.expand(-1, -1, 6, 2):

RuntimeError: The expanded size of the tensor (6) must match the existing size (2) at non-singleton dimension 2. Target sizes: [-1, -1, 6, 2]. Tensor sizes: [2, 2, 2, 2]

As explained, expand only broadcasts dimensions of size 1, so you would first need to give the tensor a singleton dimension at that position (e.g. via unsqueeze).
But even then, the contiguous() call will allocate the memory.
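To make that concrete, here is a sketch of the unsqueeze + expand approach (assuming the repetition is along dim 2, matching the repeat(1, 1, 3, 1) answer above). Note that the expanded tensor is only a view with stride 0 along the new dimension, so no memory is saved in the end: the copy just moves into the contiguous() call.

```python
import torch

# Small stand-in tensor of the same (2, 2, 2, 2) shape as in the question.
x = torch.arange(16.).view(2, 2, 2, 2)

# Insert a singleton dim so expand can broadcast it; this is a view, no copy yet.
expanded = x.unsqueeze(2).expand(-1, -1, 3, -1, -1)
print(expanded.stride()[2])  # 0 -> the expanded dim shares memory

# view(-1) needs contiguous memory, so contiguous() performs the full copy here.
out = expanded.contiguous().view(-1)

# Same flattened result as the repeat-based solution:
print(torch.equal(out, x.repeat(1, 1, 3, 1).view(-1)))  # True
```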
