How to tile a tensor?

If I have a tensor like:

z = torch.FloatTensor([[1,2,3],[4,5,6]])

1 2 3
4 5 6

How might I turn it into a tensor like:

1 2 3
1 2 3
1 2 3
1 2 3
4 5 6
4 5 6
4 5 6
4 5 6

I imagine that Tensor.repeat() is somehow in play here.

The only solution I have come up with is to do:

z.repeat(1,4).view(-1, 3)

Is there one operation that collapses these two commands into one?

Moreover, if I have columnwise data I want to repeat, how can I do this without transposing the data back and forth? For example, going from

z = torch.FloatTensor([[1,2,3],[4,5,6],[7,8,9]])

1 2 3
4 5 6
7 8 9

to

1 1 1 2 2 2 3 3 3
4 4 4 5 5 5 6 6 6
7 7 7 8 8 8 9 9 9

without saying

z.transpose(0,1).repeat(1,3).view(-1, 3).transpose(0,1)

3 Likes

For the second you can do:

z.view(-1, 1).repeat(1, 3).view(3, 9)
1 1 1 2 2 2 3 3 3
4 4 4 5 5 5 6 6 6
7 7 7 8 8 8 9 9 9
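
To see why this works, trace the shapes step by step (a sketch, using the 3×3 z from the question):

a = z.view(-1, 1)   # shape (9, 1): the column 1, 2, ..., 9
b = a.repeat(1, 3)  # shape (9, 3): each row becomes [v, v, v]
c = b.view(3, 9)    # shape (3, 9): rows read off as 1 1 1 2 2 2 3 3 3, ...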

For the first, I don’t think there is a single operation that combines all of these. MaxUnpool does something similar, but it doesn’t have the repeat ability.

9 Likes

It may be worth mentioning that .view doesn’t change or copy the underlying data in any way; it just creates a new tensor with different size and stride metadata over the same storage.

Hence, calling .view is an extremely fast operation.
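
You can convince yourself of this by checking that a view shares its storage with the original (a quick sketch; data_ptr() returns the address of the underlying storage):

v = z.view(-1)
v.data_ptr() == z.data_ptr()  # True: same storage, nothing was copied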

1 Like

For a general solution working on any dimension, I implemented tile based on the .repeat method of torch’s tensors:

import numpy as np
import torch

def tile(a, dim, n_tile):
    init_dim = a.size(dim)
    repeat_idx = [1] * a.dim()
    repeat_idx[dim] = n_tile
    a = a.repeat(*repeat_idx)  # e.g. (2, 3) -> (6, 3) for dim=0, n_tile=3
    # index permutation that reorders the stacked copies into interleaved order
    order_index = torch.LongTensor(np.concatenate([init_dim * np.arange(n_tile) + i for i in range(init_dim)]))
    return torch.index_select(a, dim, order_index)

Examples:

t = torch.FloatTensor([[1,2,3],[4,5,6]])
t
Out[54]: 
tensor([[ 1.,  2.,  3.],
        [ 4.,  5.,  6.]])
  • Across dim 0:
tile(t,0,3)
Out[53]: 
tensor([[ 1.,  2.,  3.],
        [ 1.,  2.,  3.],
        [ 1.,  2.,  3.],
        [ 4.,  5.,  6.],
        [ 4.,  5.,  6.],
        [ 4.,  5.,  6.]])
  • Across dim 1:
tile(t,1,2)
Out[55]: 
tensor([[ 1.,  1.,  2.,  2.,  3.,  3.],
        [ 4.,  4.,  5.,  5.,  6.,  6.]])

No benchmarking performed, though 🙂

13 Likes

Nice! What you wrote is, I guess, the equivalent of numpy.repeat (just with the arguments swapped in the interface); however, it would be nice to have it without numpy.

tile(torch.arange(5), dim=0, n_tile=2)
Out: tensor([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])

and in numpy

np.repeat(np.arange(5), repeats=2, axis=0)
Out: array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])

1 Like

Agreed, especially if we want to carry the accumulated gradient when, say, the input is a Variable.

I guess it should then be

def repeat_np(a, repeats, dim):
    """
    Substitute for numpy's repeat function. Taken from https://discuss.pytorch.org/t/how-to-tile-a-tensor/13853/2
    Tensor.repeat: [1, 2, 3] with repeats=2  --> [1, 2, 3, 1, 2, 3]
    np.repeat([1, 2, 3], repeats=2, axis=0)  --> [1, 1, 2, 2, 3, 3]

    :param a: tensor
    :param repeats: number of repeats
    :param dim: dimension along which to repeat
    :return: tensor with repetitions
    """
    init_dim = a.size(dim)
    repeat_idx = [1] * a.dim()
    repeat_idx[dim] = repeats
    a = a.repeat(*repeat_idx)
    # build the index permutation on the same device as the input,
    # so this works for both CPU and CUDA tensors
    order_index = torch.cat(
        [init_dim * torch.arange(repeats, device=a.device) + i for i in range(init_dim)]
    ).long()
    return torch.index_select(a, dim, order_index)
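
A quick sanity check (assuming the function above is in scope):

repeat_np(torch.tensor([1, 2, 3]), repeats=2, dim=0)
# tensor([1, 1, 2, 2, 3, 3])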

It’s easy, just like this:

z.unsqueeze(0).transpose(0,1).repeat(1,4,1).view(-1,3)

3 Likes

Or even more simply:

z.unsqueeze(1).repeat(1,4,1).view(-1,3)
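
Tracing the shapes shows why this works (a sketch, with z from the original question):

z = torch.FloatTensor([[1,2,3],[4,5,6]])  # shape (2, 3)
a = z.unsqueeze(1)                        # shape (2, 1, 3)
b = a.repeat(1, 4, 1)                     # shape (2, 4, 3): each row copied 4 times
c = b.view(-1, 3)                         # shape (8, 3): copies end up consecutive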

5 Likes

# Repeat along any dimension
import numpy as np

def repeat(x, n, dim):
    if dim == -1:
        dim = len(x.shape) - 1
    # collapse to (prefix, 1, suffix), repeat the middle axis, then restore the shape
    return (
        x.view(int(np.prod(x.shape[:dim + 1])), 1, int(np.prod(x.shape[dim + 1:])))
         .repeat(1, n, 1)
         .view(*x.shape[:dim], n * x.shape[dim], *x.shape[dim + 1:])
    )
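
For example (a quick check of the behaviour):

repeat(torch.arange(6).view(2, 3), 3, dim=1)
# tensor([[0, 0, 0, 1, 1, 1, 2, 2, 2],
#         [3, 3, 3, 4, 4, 4, 5, 5, 5]])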

1 Like

The best solution in my opinion! It would be great if PyTorch implemented an equivalent of numpy’s tile method, though.

A little function that builds on top of @Yang_Kai’s answer and provides an easy way to implement a tile function for 2D tensors:

def torch_tile(tensor, dim, n):
    """Repeat each slice along the dim axis n times in a row (2D tensors only)"""
    if dim == 0:
        return tensor.unsqueeze(0).transpose(0, 1).repeat(1, n, 1).view(-1, tensor.shape[1])
    else:
        # unsqueeze after the column axis so each element is repeated consecutively,
        # matching the dim=0 behaviour
        return tensor.unsqueeze(2).repeat(1, 1, n).view(tensor.shape[0], -1)
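
For example, with the 2×3 tensor from earlier in the thread:

t = torch.FloatTensor([[1, 2, 3], [4, 5, 6]])
torch_tile(t, 0, 2)
# tensor([[1., 2., 3.],
#         [1., 2., 3.],
#         [4., 5., 6.],
#         [4., 5., 6.]])
torch_tile(t, 1, 2)
# tensor([[1., 1., 2., 2., 3., 3.],
#         [4., 4., 5., 5., 6., 6.]])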

3 Likes

For anyone coming to this thread now: PyTorch has since introduced torch.repeat_interleave(), which addresses this in a single operation.

So one can use torch.repeat_interleave(z, repeats=3, dim=0) to obtain:

tensor([[1., 2., 3.],
        [1., 2., 3.],
        [1., 2., 3.],
        [4., 5., 6.],
        [4., 5., 6.],
        [4., 5., 6.]])

and similarly can use torch.repeat_interleave(z, repeats=3, dim=1) to obtain:

tensor([[1., 1., 1., 2., 2., 2., 3., 3., 3.],
        [4., 4., 4., 5., 5., 5., 6., 6., 6.]])
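
As a side note, repeats can also be a tensor, which gives each slice its own repeat count:

torch.repeat_interleave(z, repeats=torch.tensor([1, 2]), dim=0)
# tensor([[1., 2., 3.],
#         [4., 5., 6.],
#         [4., 5., 6.]])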

11 Likes

Thank you very much; this is exactly what I look for in a forum.

Einops recently got support for various repeat-like patterns. Examples:

>>> x
tensor([[0, 1],
        [2, 3]])

Tile over the first axis:

>>> from einops import repeat
>>> repeat(x, 'i j -> (tile i) j', tile=2)
tensor([[0, 1],
        [2, 3],
        [0, 1],
        [2, 3]])

Tile over the second axis:

>>> repeat(x, 'i j -> i (tile j)', tile=2)
tensor([[0, 1, 0, 1],
        [2, 3, 2, 3]])

Tiling over both axes:

>>> repeat(x, 'i j -> (tilei i) (tilej j)', tilei=2, tilej=3)
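
For the interleaved variant from the original question, swapping the order inside the parentheses repeats each row consecutively (a sketch; same x as above):

>>> repeat(x, 'i j -> (i tile) j', tile=2)
tensor([[0, 1],
        [0, 1],
        [2, 3],
        [2, 3]])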

2 Likes