Change the dimension of tensor

Hi,

I have a tensor with shape [1, 4, 6] like this:

a = torch.tensor([[[ 1,  2,  3,  4,  5,  6],
                   [ 7,  8,  9, 10, 11, 12],
                   [13, 14, 15, 16, 17, 18],
                   [19, 20, 21, 22, 23, 24]]])

I want to change it to a tensor like this:

[[ [[1, 2],
    [7, 8]],

   [[3, 4],
    [9, 10]],

   [[5, 6],
    [11, 12]],

   [[13, 14],
    [19, 20]],

   [[15, 16],
    [21, 22]],

   [[17, 18],
    [23, 24]] ]]
   

Is it possible?

If we use this code:

print(a.view(1, 6, 2, 2))

the result would be as below, which is not what I need:

tensor([[[[ 1,  2],
          [ 3,  4]],

         [[ 5,  6],
          [ 7,  8]],

         [[ 9, 10],
          [11, 12]],

         [[13, 14],
          [15, 16]],

         [[17, 18],
          [19, 20]],

         [[21, 22],
          [23, 24]]]])

Does anyone have an idea how to get such a result?

Thanks

Hi,

This worked for me; I hope it helps:

a = torch.tensor([[[ 1,  2,  3,  4,  5,  6],
                   [ 7,  8,  9, 10, 11, 12],
                   [13, 14, 15, 16, 17, 18],
                   [19, 20, 21, 22, 23, 24]]])
z = a.unsqueeze(0).unfold(2, 2, 2)[0].unfold(2, 2, 2).contiguous().view(1, 6, 2, 2)

Bests


Thanks! Would you please explain the code?

Best Regards

Yes, sure,

First, the tensor a you provided has size [1, 4, 6], so unsqueeze(0) adds a dimension, giving [1, 1, 4, 6].
.unfold(dim, size, stride) extracts patches along the given dimension. The first unfold converts a to a tensor of size [1, 1, 2, 6, 2]: it extracted two 6x2 patches along the dimension of size 4. Then we discard the redundant first dimension (the one created by unsqueeze(0)) with [0]. Finally, the second unfold extracts 2x2 patches along the dimension of size 6, so it creates three patches out of each 6x2 tensor (3x2x2 = 6x2).
Now we have something like [1, 2, 3, 2, 2], and .contiguous().view(1, 6, 2, 2) reshapes the tensor to the desired one.
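The intermediate shapes can be checked step by step; here is a minimal sketch of the same chain with each shape annotated:

```python
import torch

a = torch.tensor([[[ 1,  2,  3,  4,  5,  6],
                   [ 7,  8,  9, 10, 11, 12],
                   [13, 14, 15, 16, 17, 18],
                   [19, 20, 21, 22, 23, 24]]])

b = a.unsqueeze(0)      # [1, 1, 4, 6]
c = b.unfold(2, 2, 2)   # [1, 1, 2, 6, 2]: two 6x2 patches along the dim of size 4
d = c[0]                # [1, 2, 6, 2]: drop the dim added by unsqueeze(0)
e = d.unfold(2, 2, 2)   # [1, 2, 3, 2, 2]: three 2x2 patches along the dim of size 6
z = e.contiguous().view(1, 6, 2, 2)

print(z[0, 0].tolist())  # [[1, 2], [7, 8]]
print(z[0, 3].tolist())  # [[13, 14], [19, 20]]
```

Printing the intermediate shapes this way makes it easy to see which dimension each unfold operated on.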


Could you please remove the autograd tag from the question?

autograd questions are about the autograd engine itself and its semantics, not about tensor functions and manipulation.

Thanks

Yes, of course. Thanks for your reminder.

Thanks for the comprehensive explanation.
I want to do the same task in batch mode; suppose I have a tensor like this:

a = torch.tensor([[[[ 1,  2,  3,  4,  5,  6],
                   [ 7,  8,  9, 10, 11, 12],
                   [13, 14, 15, 16, 17, 18],
                   [19, 20, 21, 22, 23, 24]]],
                  
                 [[[25,  26,  27, 28, 29, 30],
                   [31,  32,  33, 34, 35, 36],
                   [37, 38, 39, 37, 38, 39],
                   [40, 41, 42, 43, 44, 45]]]])

I want to have something like this as a result:

tensor([[[[ 1,  2],
          [ 3,  4]],

         [[ 5,  6],
          [ 7,  8]],

         [[ 9, 10],
          [11, 12]],

         [[13, 14],
          [15, 16]],

         [[17, 18],
          [19, 20]],

         [[21, 22],
          [23, 24]]],
------------------------------
         [[[25, 26],
          [27, 28]],

         [[29, 30],
          [31, 32]],

         [[33, 34],
          [35, 36]],

         [[37, 38],
          [39, 37]],

         [[38, 39],
          [40, 41]],

         [[42, 43],
          [44, 45]]]])

when I use this code:

z= a.unfold(2, 2, 2)[0].unfold(2, 2, 2).contiguous().view(1, 6, 2, 2)

only the first matrix of the tensor is affected. Would you please guide me on this, too?

Thanks

Hi, you’re welcome.

The code is actually the same, because in my first answer I treated your tensor as a batch containing only ONE tensor by using unsqueeze(0). So the same code still works:

a.unsqueeze(0).unfold(2, 2, 2)[0].unfold(2, 2, 2).contiguous().view(2, 6, 2, 2)

To explain it precisely: extracting patches is all about playing with the number of unfold() calls you apply consecutively and their arguments, which are dim, size, and stride. The most important one is dim, because patches are extracted along that dimension.

P.S.: I just changed the first argument of view from 1 to 2.

best regards
Nik

Hi, thanks for your favor.

I tested what you said as below:

import torch
a = torch.tensor([[[[ 1,  2,  3,  4,  5,  6],
                   [ 7,  8,  9, 10, 11, 12],
                   [13, 14, 15, 16, 17, 18],
                   [19, 20, 21, 22, 23, 24]]],
                  
                 [[[25,  26,  27, 28, 29, 30],
                   [31,  32,  33, 34, 35, 36],
                   [37, 38, 39, 37, 38, 39],
                   [40, 41, 42, 43, 44, 45]]]])

z= a.unfold(2, 2, 2)[0].unfold(2, 2, 2).contiguous().view(2, 6, 2, 2)
print(z)

but I received this error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-6-de88f8290113> in <module>()
     10                    [40, 41, 42, 43, 44, 45]]]])
     11 
---> 12 z= a.unfold(2, 2, 2)[0].unfold(2, 2, 2).contiguous().view(2, 6, 2, 2)
     13 print(z)

RuntimeError: shape '[2, 6, 2, 2]' is invalid for input of size 24

Would you please look at it again?

Many thanks

Add unsqueeze(0) first. I have treated your tensors as images.

import torch
a = torch.tensor([[[[ 1,  2,  3,  4,  5,  6],
                   [ 7,  8,  9, 10, 11, 12],
                   [13, 14, 15, 16, 17, 18],
                   [19, 20, 21, 22, 23, 24]]],
                  
                 [[[25,  26,  27, 28, 29, 30],
                   [31,  32,  33, 34, 35, 36],
                   [37, 38, 39, 37, 38, 39],
                   [40, 41, 42, 43, 44, 45]]]])

z= a.unsqueeze(0).unfold(2, 2, 2)[0].unfold(2, 2, 2).contiguous().view(2, 6, 2, 2)
print(z)

I received this error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-9-9e80a02d4148> in <module>()
     10                    [40, 41, 42, 43, 44, 45]]]])
     11 
---> 12 z= a.unsqueeze(0).unfold(2, 2, 2)[0].unfold(2, 2, 2).contiguous().view(2, 6, 2, 2)
     13 print(z)

RuntimeError: invalid argument 3: out of range at ..\aten\src\TH/generic/THTensor.cpp:392

Please note that the shape of the tensor is torch.Size([2, 1, 4, 6]).
I would appreciate it if you check it again.

Thanks a lot

Oh, sorry.
I had used the tensor from the first post.


a.unfold(2, 2, 2).unfold(3, 2, 2).contiguous().view(2, 6, 2, 2)

By the way, as I said, you only need to work with dim and the number of unfold calls. Easy does it.
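As a quick sanity check on the shapes, here is a minimal sketch of the batched version; it uses torch.arange as a stand-in tensor with the same [2, 1, 4, 6] shape as the one above:

```python
import torch

# Stand-in for the batched tensor above: 2 images, 1 channel, 4x6 each
a = torch.arange(1, 49).view(2, 1, 4, 6)

u1 = a.unfold(2, 2, 2)                # [2, 1, 2, 6, 2]: split the 4 rows into 2 patches
u2 = u1.unfold(3, 2, 2)               # [2, 1, 2, 3, 2, 2]: split the 6 cols into 3 patches
z = u2.contiguous().view(2, 6, 2, 2)  # six 2x2 patches per image

print(z.shape)           # torch.Size([2, 6, 2, 2])
print(z[0, 0].tolist())  # [[1, 2], [7, 8]]
```

Since no unsqueeze(0) or [0] indexing is needed here, the batch dimension passes through untouched and both images are processed.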

Bests
Nik

Thanks Nik, it helped me a lot.
Best Regards

Hi there, I was following up on the topic and was able to transform my tensor to [8, 1024, 169]; however, I need to transform it to [8, 1024, 13, 13].

Does anyone have an idea? Thank you very much in advance!

If your tensor already has the shape [8, 1024, 169], you could use a view operation, x.view(8, 1024, 13, 13), to create the desired shape.
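For example, with a random tensor of the stated shape (the view only works because 13 * 13 = 169):

```python
import torch

x = torch.randn(8, 1024, 169)
y = x.view(8, 1024, 13, 13)  # valid: 13 * 13 = 169, so the element count matches

print(y.shape)  # torch.Size([8, 1024, 13, 13])
```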

Oh, thanks a lot for the solution!

Sorry for another question about changing my tensor dimensions… I have another tensor, which I was able to transform to torch.Size([16, 1024, 9, 9]), but similar to the other one I need it transformed to torch.Size([16, 1024, 13, 13]). I tried a few things but haven't found the solution yet. And thank you again!

You won’t be able to reshape/view the input tensor of shape [16, 1024, 9, 9] into [16, 1024, 13, 13], since the latter contains more elements.
However, you could use an interpolation method via e.g. nn.Upsample or F.interpolate, or alternatively transposed convolution layers.
Let me know if one of these approaches works for you.
