Hi everybody,
Is there a current API that behaves similarly to NumPy's, where we can repeat each row individually?
x = np.array([[1,2],[3,4]])
np.repeat(x, [1, 2], axis=0)
array([[1, 2],
       [3, 4],
       [3, 4]])
I didn’t find any similar function. What would be the most efficient way of doing this while still being able to backpropagate?
Thank you in advance
otutay (noname), April 1, 2019, 6:42am
Search for torch.repeat in the following document:
https://pytorch.org/docs/stable/tensors.html
x = torch.tensor([[1,2],[3,4]])
x.repeat(1,2)
This should work the way you want.
justusschock:
x = torch.tensor([[1,2],[3,4]])
x.repeat(1,2)
I don’t obtain the same output:
In [9]: import numpy as np
In [10]: import torch
In [11]: x = torch.tensor([[1,2],[3,4]])
    ...: x.repeat(1,2)
Out[11]:
tensor([[1, 2, 1, 2],
        [3, 4, 3, 4]])
In [12]: x = np.array([[1,2],[3,4]])
In [13]: np.repeat(x, [1,2], axis=0)
Out[13]:
array([[1, 2],
       [3, 4],
       [3, 4]])
Could you provide an example that produces the same output as NumPy?
Do you think there would be a better way than the following?
In [1]: import torch
In [2]: import numpy as np
In [3]: x = torch.FloatTensor([[1,2],[3,4]])
   ...: xx = x.split(1)
In [4]: xx
Out[4]: (tensor([[1., 2.]]), tensor([[3., 4.]]))
In [5]: out = torch.FloatTensor([])
   ...:
   ...: for x_sub, num_repeat in zip(xx, [1, 2]):
   ...:     out = torch.cat([out, x_sub.expand(num_repeat, -1)])
In [6]: out
Out[6]:
tensor([[1., 2.],
        [3., 4.],
        [3., 4.]])
In [7]: x = np.array([[1,2],[3,4]])
In [8]: np.repeat(x, [1,2], axis=0)
Out[8]:
array([[1, 2],
       [3, 4],
       [3, 4]])
Does this work for sure with backpropagation?
Hello,
I tried the code below, and it seems that it works with backpropagation.
x = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True)  # floats: integer tensors cannot require grad
a, b = x.split(1)
a.requires_grad
>> True
a.grad_fn
>> <SplitBackward at 0x7f1ed6f25128>
out = torch.FloatTensor([])
torch.cat([out, a.expand(1, -1)])
>> tensor([[1., 2.]], grad_fn=<CatBackward>)
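To go one step beyond the grad_fn check, here is a minimal sketch (my addition, not from the post above) that runs a full backward pass through the same split/expand/cat pipeline:

import torch

x = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True)

# Rebuild the repeated tensor exactly as in the loop above.
out = torch.cat([x_sub.expand(n, -1)
                 for x_sub, n in zip(x.split(1), [1, 2])])

out.sum().backward()
print(x.grad)
# tensor([[1., 1.],
#         [2., 2.]])  <- each row's gradient is summed over its copies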
Thank you for your answer!
I’m just wondering about efficiency, whether this makes the computation much slower. Do you have an idea?
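For what it's worth, a loop-free alternative sketch (my own suggestion, not benchmarked): precompute the repeated row indices, which need no gradient, and gather them in one call with index_select, which is differentiable:

import numpy as np
import torch

x = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True)
repeats = [1, 2]

# [0, 1, 1]: row 0 once, row 1 twice. The indices carry no gradient,
# so building them with np.repeat is fine.
idx = torch.from_numpy(np.repeat(np.arange(len(repeats)), repeats))

out = x.index_select(0, idx)  # a single differentiable gather along dim 0
# tensor([[1., 2.],
#         [3., 4.],
#         [3., 4.]])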
Einops recently got support for repeat-like patterns. Examples:
# np.repeat behavior, repeat rows (copies are in succession like aaabbbcccddd)
einops.repeat(x, 'i j -> (i copy) j', copy=3)
# np.repeat behavior, repeat columns (copies are in succession like aaabbbcccddd)
einops.repeat(x, 'i j -> i (j copy)', copy=3)
# np.tile behavior (whole sequence is repeated 3 times like abcdabcdabcd)
einops.repeat(x, 'i j -> (copy i) j', copy=3)
You can repeat/tile multiple axes independently within one operation.
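For concreteness, here is what the first pattern gives on the 2x2 example from this thread (a small sketch assuming einops is installed; note that einops repeats every row the same number of times, so per-row counts like [1, 2] are not covered):

import torch
import einops

x = torch.tensor([[1, 2], [3, 4]])

# np.repeat behavior along rows: each row appears copy=2 times in succession.
print(einops.repeat(x, 'i j -> (i copy) j', copy=2))
# tensor([[1, 2],
#         [1, 2],
#         [3, 4],
#         [3, 4]])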
You can do this with repeat_interleave.
It even includes your exact example in its documentation (I am guessing they introduced it after your post and perhaps even because of it):
>>> y = torch.tensor([[1, 2], [3, 4]])
>>> torch.repeat_interleave(y, 2)
tensor([1, 1, 2, 2, 3, 3, 4, 4])
>>> torch.repeat_interleave(y, 3, dim=1)
tensor([[1, 1, 1, 2, 2, 2],
        [3, 3, 3, 4, 4, 4]])
>>> torch.repeat_interleave(y, torch.tensor([1, 2]), dim=0)
tensor([[1, 2],
        [3, 4],
        [3, 4]])
x = torch.Tensor([[1, 2], [3, 4]])
torch.repeat_interleave(x, torch.tensor([1, 2]), dim=0)
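And a quick sketch (my addition) confirming that repeat_interleave also backpropagates as expected, with each input row's gradient accumulated once per copy:

import torch

x = torch.tensor([[1., 2.], [3., 4.]], requires_grad=True)
out = torch.repeat_interleave(x, torch.tensor([1, 2]), dim=0)

out.sum().backward()
print(x.grad)
# tensor([[1., 1.],
#         [2., 2.]])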