Even though I picked r=23, which should return just 1 combination, I got the above-mentioned error: CUDA out of memory. Tried to allocate 7869836414.81 GiB (...).

It creates a list with 23 values, like you said: [0, 1, ..., 22].
So I wanted to test how many different combinations torch.combinations() returns as I increase the r value, which determines how many values go into each combination.
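
For reference, the number of r-element combinations of a 23-element list can be checked with plain Python's math.comb (nothing torch-specific here):

import math

# Number of r-element combinations of a 23-element list.
# The count peaks around r=11-12 and is exactly 1 at r=23.
for r in (7, 8, 12, 23):
    print(r, math.comb(23, r))
# 7  -> 245157
# 8  -> 490314
# 12 -> 1352078
# 23 -> 1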

I get an error for values of r > 7, I think, which makes sense because it finds a lot of different combinations, and those take up a lot of memory. But when I tested the function with r=23, I was surprised to get a CUDA out of memory error, since the list is 23 elements long and torch.combinations() should therefore just return the original list again. I tried it for a smaller list as well:
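
Something along these lines, with small illustrative values:

import torch

# Small illustrative case: r equal to the length of the input,
# so the only possible combination is the whole list itself.
values = torch.arange(5)
print(torch.combinations(values, r=5))
# tensor([[0, 1, 2, 3, 4]])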

Oh, sorry about my first answer, I was totally somewhere else. Actually, it seems you are right: it should return only a single combination containing all 23 numbers. The strange part is that if you run your code for i=1 up to i=9 it works fine, but after that it fails and tries to allocate 9 GB!

I ran the code below:

import time
import torch

for i in range(1, 10):
    t = time.time()
    values = torch.arange(i)                        # the list [0, 1, ..., i-1]
    combinations = torch.combinations(values, r=i)  # only one combination is possible
    print('for i=', i, ' => ', time.time() - t)
    print(combinations)
# here is the strange output (look at the time)
# for i= 1 => 0.0003619194030761719
# tensor([[0]])
# for i= 2 => 0.0005240440368652344
# tensor([[0, 1]])
# for i= 3 => 0.00019550323486328125
# tensor([[0, 1, 2]])
# for i= 4 => 0.000286102294921875
# tensor([[0, 1, 2, 3]])
# for i= 5 => 0.0006687641143798828
# tensor([[0, 1, 2, 3, 4]])
# for i= 6 => 0.001373291015625
# tensor([[0, 1, 2, 3, 4, 5]])
# for i= 7 => 0.018665552139282227
# tensor([[0, 1, 2, 3, 4, 5, 6]])
# for i= 8 => 0.36806702613830566
# tensor([[0, 1, 2, 3, 4, 5, 6, 7]])
# for i= 9 => 8.350317239761353
# tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8]])

For i=10, I could not get an answer at all!
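
My guess about the cause (an assumption on my side; I have not checked the PyTorch source) is that torch.combinations first materializes an intermediate grid of roughly i**i entries, e.g. via a Cartesian-product/meshgrid step, and only afterwards masks out the invalid rows. The final result is a single tiny row, but the intermediate grows explosively, which would explain both the timing curve above and the huge allocation for r=23. A rough back-of-the-envelope check (the exact multiplier depends on how many copies of the grid are held at once, so these are only orders of magnitude):

# Order-of-magnitude size of an i**i intermediate stored as int64 (8 bytes per element).
for i in range(1, 11):
    entries = i ** i
    print(i, entries, round(entries * 8 / 2 ** 30, 3), 'GiB')
# i=9  ->  387420489 entries, roughly 2.9 GiB
# i=10 -> 10000000000 entries, roughly 74.5 GiB

That would line up with i=9 already taking more than eight seconds and i=10 never finishing.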

Here is the code using itertools, which is the basis for torch.combinations: