One way would be to use advanced indexing together with the stdlib function random.shuffle. I used it on a plain list rather than on a call to torch.arange, as shuffle seems to go against torch's semantics under the hood.
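A minimal sketch of what I mean (the variable names are mine, and I'm assuming the same permutation is meant to be applied across all rows):

```python
import random
import torch

mat = torch.tensor([[11.0, 12, 13], [21, 22, 23], [31, 32, 33], [41, 42, 43]])

idx = list(range(mat.shape[1]))   # a plain Python list, not torch.arange
random.shuffle(idx)               # in-place stdlib shuffle
res = mat[:, idx]                 # advanced indexing applies the same
                                  # column permutation to every row
```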
I can’t* think of a way to do this using built-in tensor functions without a loop.

*) Well, actually I can, but at a cost in efficiency.
Try this (for m = 4, n = 3):

import torch

mat = torch.tensor([[11.0, 12, 13], [21, 22, 23], [31, 32, 33], [41, 42, 43]])
# argsort of uniform noise yields an independent random permutation per column
ind = torch.rand(4, 3).argsort(dim=0)
# scatter each column's values into the randomly permuted row positions
res = torch.zeros(4, 3).scatter_(0, ind, mat)
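As a quick sanity check (my addition), every column of res should be a permutation of the corresponding column of mat, so sorting res along dim 0 should recover mat (whose columns happen to be sorted already):

```python
import torch

mat = torch.tensor([[11.0, 12, 13], [21, 22, 23], [31, 32, 33], [41, 42, 43]])
ind = torch.rand(4, 3).argsort(dim=0)
res = torch.zeros(4, 3).scatter_(0, ind, mat)

# Sorting each column of res recovers the (already column-sorted) mat.
assert torch.equal(res.sort(dim=0).values, mat)
```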
The computational time complexity of your task should be O(m * n): you have n columns, and the cost of randomly permuting a length-m column is O(m). But the cost of sorting a length-m column is O(m * log(m)), so my scheme costs O(n * m * log(m)).
The point is that I can’t figure out how to get the randomly permuted columns of indices without a loop or the sort trick.
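For reference, here is how I picture the loop-based solution (a sketch with my own variable names, using torch.randperm to draw a fresh permutation per column):

```python
import torch

m, n = 4, 3
mat = torch.tensor([[11.0, 12, 13], [21, 22, 23], [31, 32, 33], [41, 42, 43]])

res = torch.empty_like(mat)
for j in range(n):
    perm = torch.randperm(m)    # an O(m) random permutation for this column
    res[:, j] = mat[perm, j]    # permute rows of column j independently
```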
(Note that tymokvo’s approach applies the same random permutation to each of the rows. Antoine is asking for a distinct random permutation for (in his case) each of the columns, as his loop-based solution does. Also, for reasons I don’t understand (tymokvo’s code looks right for what it does), the final result in tymokvo’s post has a duplicated column, (1, 5, 9, 13), and a missing column, (3, 7, 11, 15).)