The obvious choice would be repeat. I have a one-dimensional tensor, which is basically a column. I want to repeat that column three times along the second dimension (and once along the first dimension), so I write:
tr = t.repeat(1, 3)
I get a tensor of size 1x9.
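A minimal repro of what I'm seeing (assuming `t` is a length-3 tensor standing in for my actual column):

```python
import torch

# a 1-D tensor of length 3
t = torch.tensor([1, 2, 3])

# repeat() with more arguments than the tensor has dimensions
# treats the tensor as if leading singleton dimensions were
# prepended, i.e. t is handled as shape (1, 3) here
tr = t.repeat(1, 3)

print(tr.shape)  # torch.Size([1, 9])
print(tr)        # tensor([[1, 2, 3, 1, 2, 3, 1, 2, 3]])
```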
I simply don’t get the logic behind this. To achieve what I want, I have to write:
tr = t.repeat(3, 1).t()
It looks like the first dimension is somehow transposed to the second when doing repeat, which is kind of confusing. Is this the intended behavior?
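For reference, here is the roundabout version in full, together with the only other way I've found to get the result I expect (repeating along a new second dimension via `unsqueeze`), again assuming a length-3 `t`:

```python
import torch

t = torch.tensor([1, 2, 3])

# repeat along dim 0, then transpose to get the repeated column
tr = t.repeat(3, 1).t()
print(tr)
# tensor([[1, 1, 1],
#         [2, 2, 2],
#         [3, 3, 3]])

# equivalent: make t an explicit (3, 1) column first, then repeat
tr2 = t.unsqueeze(1).repeat(1, 3)
print(torch.equal(tr, tr2))  # True
```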