How to tile a tensor?


(Marc) #1

If I have a tensor like:

z = torch.FloatTensor([[1,2,3],[4,5,6]])

1 2 3
4 5 6

How might I turn it into a tensor like:

1 2 3
1 2 3
1 2 3
1 2 3
4 5 6
4 5 6
4 5 6
4 5 6

I imagine that Tensor.repeat() is somehow in play here.

The only solution I have come up with is to do:

z.repeat(1,4).view(-1, 3)

Is there one operation that collapses these two commands into one?

Moreover, if I have columnwise data I want to repeat, how can I do this without transposing the data back and forth? For example, going from

z = torch.FloatTensor([[1,2,3],[4,5,6],[7,8,9]])

1 2 3
4 5 6
7 8 9

to

1 1 1 2 2 2 3 3 3
4 4 4 5 5 5 6 6 6
7 7 7 8 8 8 9 9 9

without saying

z.transpose(0,1).repeat(1,3).view(-1, 3).transpose(0,1)


#2

For the second you can do:

z.view(-1, 1).repeat(1, 3).view(3, 9)
1 1 1 2 2 2 3 3 3
4 4 4 5 5 5 6 6 6
7 7 7 8 8 8 9 9 9
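
The same trick also works without hard-coding the output shape; a sketch of the general form, assuming z is 2-D:

z.view(-1, 1).repeat(1, 3).view(z.size(0), -1)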

For the first, I don’t think there is a single operation that combines all of these. MaxUnpool does something similar, but it doesn’t have the repeat ability.


(jpeg729) #3

It may be worth mentioning that .view doesn’t change or copy the underlying data in any way; it returns a new tensor that shares the same storage and only changes the shape and stride metadata.

Hence, calling .view is an extremely fast operation.
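
For example, you can check that the view shares z’s memory while repeat copies it (a quick illustration):

z = torch.FloatTensor([[1,2,3],[4,5,6]])
z.view(-1, 1).data_ptr() == z.data_ptr()   # True: same underlying storage
z.repeat(1, 4).data_ptr() == z.data_ptr()  # False: repeat allocates a copy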


(Eddie) #4

For a general solution that works along any dimension, I implemented tile based on the .repeat method of torch tensors:

import numpy as np
import torch

def tile(a, dim, n_tile):
    init_dim = a.size(dim)
    repeat_idx = [1] * a.dim()
    repeat_idx[dim] = n_tile
    # repeat the whole tensor n_tile times along dim: [block, block, ..., block]
    a = a.repeat(*repeat_idx)
    # reorder so that the n_tile copies of each original slice end up adjacent
    order_index = torch.LongTensor(
        np.concatenate([init_dim * np.arange(n_tile) + i for i in range(init_dim)])
    )
    return torch.index_select(a, dim, order_index)

Examples:

t = torch.FloatTensor([[1,2,3],[4,5,6]])
Out[54]: 
tensor([[ 1.,  2.,  3.],
        [ 4.,  5.,  6.]])
  • Across dim 0:
tile(t,0,3)
Out[53]: 
tensor([[ 1.,  2.,  3.],
        [ 1.,  2.,  3.],
        [ 1.,  2.,  3.],
        [ 4.,  5.,  6.],
        [ 4.,  5.,  6.],
        [ 4.,  5.,  6.]])
  • Across dim 1:
tile(t,1,2)
Out[55]: 
tensor([[ 1.,  1.,  2.,  2.,  3.,  3.],
        [ 4.,  4.,  5.,  5.,  6.,  6.]])

No benchmarking performed, though :)


#5

Nice! What you wrote is, I guess, the equivalent of numpy.repeat (just with the argument order swapped in the interface); however, it would be nice to have it without numpy.

tile(torch.arange(5), dim=0, n_tile=2)
Out: tensor([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])

and in numpy

np.repeat(np.arange(5), repeats=2, axis=0)
Out: array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])

(Adarsh) #6

Agreed, especially if we want the gradient to flow through it when, say, the input is a Variable.


#7

I guess it should then be

import torch

def repeat_np(a, repeats, dim):
    """
    Substitute for numpy's repeat function. Taken from https://discuss.pytorch.org/t/how-to-tile-a-tensor/13853/2
    a.repeat(2) for a = [1, 2, 3]          --> [1, 2, 3, 1, 2, 3]
    np.repeat([1,2,3], repeats=2, axis=0)  --> [1, 1, 2, 2, 3, 3]

    :param a: tensor
    :param repeats: number of repeats
    :param dim: dimension along which to repeat
    :return: tensor with repetitions
    """
    init_dim = a.size(dim)
    repeat_idx = [1] * a.dim()
    repeat_idx[dim] = repeats
    a = a.repeat(*repeat_idx)
    # build the interleaving index on the same device as the input,
    # so CUDA tensors work without a special case
    order_index = torch.cat(
        [init_dim * torch.arange(repeats, dtype=torch.long, device=a.device) + i
         for i in range(init_dim)]
    )
    return torch.index_select(a, dim, order_index)
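
A quick sanity check against numpy, assuming repeat_np above is in scope:

repeat_np(torch.arange(5), repeats=2, dim=0)
Out: tensor([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])

np.repeat(np.arange(5), repeats=2, axis=0)
Out: array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])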

(Yang Kai) #8

It’s easy, just like this:

z.unsqueeze(0).transpose(0,1).repeat(1,4,1).view(-1,3)

(Manu NALEPA) #9

Or even more simply:

z.unsqueeze(1).repeat(1,4,1).view(-1,3)
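
For what it’s worth, newer PyTorch versions (1.1 and later) ship torch.repeat_interleave, which does in one operation what this thread reimplements (the equivalent of numpy.repeat):

z = torch.FloatTensor([[1,2,3],[4,5,6]])
z.repeat_interleave(4, dim=0)  # rows: 1 2 3 (x4), then 4 5 6 (x4)

z = torch.FloatTensor([[1,2,3],[4,5,6],[7,8,9]])
z.repeat_interleave(3, dim=1)  # rows: 1 1 1 2 2 2 3 3 3, etc.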