PyTorch repeat 3rd dimension

I’m following this example from the docs.

In [42]: x = torch.tensor([1,2,3])


In [45]: x.repeat(4,2) 
Out[45]: tensor([[1, 2, 3, 1, 2, 3],
        [1, 2, 3, 1, 2, 3],
        [1, 2, 3, 1, 2, 3],
        [1, 2, 3, 1, 2, 3]])

In [46]: x.repeat(4,2).shape 
Out[46]: torch.Size([4, 6])

So far, so good.

But why does repeating just 1 time along the 3rd dimension expand the 3rd dim to 3 (not 1)?

[From the docs]

>>> x.repeat(4, 2, 1).size()
torch.Size([4, 2, 3])

Double-checking:

In [43]: x.repeat(4,2,1)
Out[43]:
tensor([[[1, 2, 3],
         [1, 2, 3]],

        [[1, 2, 3],
         [1, 2, 3]],

        [[1, 2, 3],
         [1, 2, 3]],

        [[1, 2, 3],
         [1, 2, 3]]])

Why does it behave this way?

Maybe it considers the unsqueezed shape. For example,

x = torch.tensor([1, 2, 3])
x.shape

gives

torch.Size([3])

When we do

x.repeat(4, 2)

it considers the shape of x to be

torch.Size([1, 3])

and repeats this tensor 4 times along the 1st dim and 2 times along the 2nd dim, resulting in an output shape of

torch.Size([4, 6])
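
If that reading is right, then unsqueezing x to shape [1, 3] by hand and repeating should give an identical result. A quick check (my own session, not from the docs):

In [47]: torch.equal(x.repeat(4, 2), x.unsqueeze(0).repeat(4, 2))
Out[47]: True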

When we do

x.repeat(4, 2, 1)

the unsqueezed shape of x becomes

torch.Size([1, 1, 3])

and it repeats this tensor 4 times along the 1st dim, 2 times along the 2nd dim, and 1 time along the 3rd dim, resulting in an output shape of

torch.Size([4, 2, 3])
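
The same check works for the three-argument case, unsqueezing twice to get shape [1, 1, 3] first:

In [48]: torch.equal(x.repeat(4, 2, 1), x.unsqueeze(0).unsqueeze(0).repeat(4, 2, 1))
Out[48]: True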

Thank you. Yes, that’s my guess too. How should we confirm it?

Maybe the implementation of torch.repeat is here,

where they say, "add new leading dimensions to the tensor if the number of target dimensions is larger than the number of source dimensions", that is this.
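
To make that rule concrete, here is a minimal pure-Python sketch of it (my own illustration, not the actual ATen implementation, which is in C++): prepend size-1 dimensions until the tensor has as many dims as there are repeat sizes, then tile along each dimension.

import torch

def repeat_sketch(t, *sizes):
    # Sketch of torch.Tensor.repeat semantics; assumes positive sizes.
    assert len(sizes) >= t.dim(), "needs at least as many sizes as dims"
    # "add new leading dimensions to the tensor if the number of target
    # dimensions is larger than the number of source dimensions"
    while t.dim() < len(sizes):
        t = t.unsqueeze(0)
    # tile each dimension the requested number of times
    for dim, n in enumerate(sizes):
        t = torch.cat([t] * n, dim=dim)
    return t

x = torch.tensor([1, 2, 3])
print(torch.equal(repeat_sketch(x, 4, 2), x.repeat(4, 2)))        # True
print(torch.equal(repeat_sketch(x, 4, 2, 1), x.repeat(4, 2, 1)))  # True

With the leading size-1 dims in place, repeating 1 time along the 3rd dimension just keeps the original size-3 axis, which is why the result is [4, 2, 3].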
