How to efficiently perform random array assignments and stack the results?

Basic requirement: suppose I have an array [9, 8, 7], and I generate two random index arrays, say [1, 1, 0] and [0, 2, 1]. The assignment results are then [8, 8, 9] and [9, 7, 8], and finally I want the stacked result: [[8, 8, 9], [9, 7, 8]].
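To make the requirement concrete, the worked example above can be reproduced with fixed indices (the tensor values here are just the ones from the example):

```python
import torch

my_array = torch.tensor([9., 8., 7.])
idx = torch.tensor([[1, 1, 0],
                    [0, 2, 1]])   # the two index arrays from the example
result = my_array[idx]            # gather one layer per row of idx
# result -> [[8., 8., 9.], [9., 7., 8.]]
```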

I can achieve this with the following for-loop:

import torch
my_array = torch.Tensor([9, 8, 7])
for i in range(2):  # for each layer
    my_assignment_indices = torch.randint(0, 3, size=(3,))  # for each location, randomly choose a source index
    current_assignment_array = my_array[my_assignment_indices].unsqueeze(0)  # move (assign)
    if i == 0:
        result = current_assignment_array  # the first layer: create result
    else:
        result = torch.cat([result, current_assignment_array])  # subsequent layers: stack them together
print(result)  # output my result

In practice, I have two parameters: N (array length), K (layer number).
Notice that each index is randomly chosen from [0, …, N-1].

Now I am confused by the following two problems:

  1. How can I perform the above process efficiently with very large N and K, on CPU or GPU?
  2. How can I randomly assign K different elements to each location? The above code does not guarantee that any two indices chosen for the same location are different.

I’m hoping for a better solution. Thank you for looking at my problem :smile:.

It seems that this problem is very hard… I tried several times and failed. :sweat_smile:

  1. Instead of using torch.cat in the loop, you could append the results to a list and convert it to a tensor afterwards.
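A minimal sketch of that pattern, plus a fully vectorized alternative that avoids the Python loop entirely (the sizes `N` and `K` here are assumed example values):

```python
import torch

N, K = 5, 3  # assumed example sizes
my_array = torch.arange(N, dtype=torch.float32) + 1

# List-append pattern: collect each layer, stack once at the end.
layers = []
for _ in range(K):
    idx = torch.randint(0, N, size=(N,))
    layers.append(my_array[idx])
result = torch.stack(layers)        # shape (K, N)

# Fully vectorized alternative: draw all K*N indices in one call.
all_idx = torch.randint(0, N, size=(K, N))
result_vec = my_array[all_idx]      # shape (K, N), no Python loop
```

The vectorized form also runs on the GPU if `my_array` and the index tensor are created on a CUDA device.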

  2. You could use torch.randperm and slice the first K indices. This would make sure that all indices for a location are unique but shuffled.
