Tried to allocate 7869836414.81 GiB

I wanted to check how many combinations the torch.combinations() function would return for different r values.

values = torch.arange(23, dtype=torch.uint8, device="cuda:0")
combinations = torch.combinations(values, r=23)

Even though I picked r=23, which should return just 1 combination, I got the error mentioned above: CUDA out of memory. Tried to allocate 7869836414.81 GiB (...).

Why does this happen?

Hi,

This line generates the values [0, 1, ..., 22]. If you want just the single number 23, close the range at [23, 24]:

values = torch.arange(23, 24, dtype=torch.uint8, device="cuda:0")

Best

Thanks for your reply.

I mean when I run:

values = torch.arange(23, dtype=torch.uint8, device="cuda:0")

it creates a list with 23 values like you said: [0, 1, ..., 22].
So I wanted to test how many different combinations torch.combinations() returns when I increase the r value, which determines how many values are in a combination.

I get an error for values of r > 7, I think, which makes sense: the function finds a lot of different combinations, and those take up a lot of memory. But when I tested the function with r=23, I was surprised to get a CUDA out-of-memory error, since the list is only 23 elements long, so torch.combinations() should just return the original list as its single combination. I tried it with a smaller list as well:

values = torch.arange(5)
combinations = torch.combinations(values, r=5)
# len(combinations) = 1

So I was wondering why I got an error with my first example.
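For reference, the expected result size is just the binomial coefficient C(n, r), which can be checked directly with Python's math.comb (available since Python 3.8):

```python
import math

n = 23
# Number of r-element combinations of n distinct values: C(n, r).
for r in (7, 8, 23):
    print(f"r={r}: {math.comb(n, r)} combinations")
# r=7: 245157 combinations
# r=8: 490314 combinations
# r=23: 1 combinations
```

So even the largest case here needs fewer than half a million rows, nowhere near gigabytes of memory.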


Oh, sorry about my first answer, I was thinking of something else. Actually, you are right: it should return a single combination containing all 23 numbers. The strange part is that for i=1 up to i=9 your code works fine, but after that it fails and tries to allocate about 9 GB!

I ran the code below:

import time
import torch

for i in range(1, 10):
    t = time.time()
    values = torch.arange(i)
    combinations = torch.combinations(values, r=i)
    print('for i=', i, ' => ', time.time() - t)
    print(combinations)

# here is the strange output (look at the time)
# for i= 1  =>  0.0003619194030761719
# tensor([[0]])
# for i= 2  =>  0.0005240440368652344
# tensor([[0, 1]])
# for i= 3  =>  0.00019550323486328125
# tensor([[0, 1, 2]])
# for i= 4  =>  0.000286102294921875
# tensor([[0, 1, 2, 3]])
# for i= 5  =>  0.0006687641143798828
# tensor([[0, 1, 2, 3, 4]])
# for i= 6  =>  0.001373291015625
# tensor([[0, 1, 2, 3, 4, 5]])
# for i= 7  =>  0.018665552139282227
# tensor([[0, 1, 2, 3, 4, 5, 6]])
# for i= 8  =>  0.36806702613830566
# tensor([[0, 1, 2, 3, 4, 5, 6, 7]])
# for i= 9  =>  8.350317239761353
# tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8]])

For i=10 I could not get any answer!
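The exponential slowdown would be consistent with the implementation materializing an intermediate of roughly n^r entries (the full r-fold Cartesian grid) before filtering, rather than only the C(n, r) rows of the final answer. That is an assumption about the internals, but a quick comparison shows how fast the two quantities diverge:

```python
import math

n = 10
for r in range(1, n + 1):
    final = math.comb(n, r)   # rows in the returned tensor
    grid = n ** r             # entries in a full r-fold Cartesian grid
    print(f"r={r}: C({n},{r})={final}, {n}^{r}={grid}")
```

At r=10 the final answer has a single row, while the grid already has 10^10 entries.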

Here is the same test using itertools, whose behavior torch.combinations is documented to match:


import itertools
import time
import torch

for i in range(1, 11):
    t = time.time()
    values = torch.arange(i)
    combinations = list(itertools.combinations(values, r=i))
    combinations = torch.tensor(combinations)
    print('for i=', i, ' => ', time.time() - t)
    print(combinations)

# output

# for i= 1  =>  0.0003228187561035156
# tensor([[0]])
# for i= 2  =>  0.0002307891845703125
# tensor([[0, 1]])
# for i= 3  =>  7.605552673339844e-05
# tensor([[0, 1, 2]])
# for i= 4  =>  0.0005385875701904297
# tensor([[0, 1, 2, 3]])
# for i= 5  =>  0.00017261505126953125
# tensor([[0, 1, 2, 3, 4]])
# for i= 6  =>  0.0003154277801513672
# tensor([[0, 1, 2, 3, 4, 5]])
# for i= 7  =>  9.918212890625e-05
# tensor([[0, 1, 2, 3, 4, 5, 6]])
# for i= 8  =>  0.00019240379333496094
# tensor([[0, 1, 2, 3, 4, 5, 6, 7]])
# for i= 9  =>  9.393692016601562e-05
# tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8]])
# for i= 10  =>  0.0001595020294189453
# tensor([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]])

As expected, itertools produces every answer in roughly the same (tiny) amount of time.
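Until this is fixed, iterating lazily with itertools avoids the blowup entirely, since it yields one combination at a time instead of building a grid. A minimal sketch:

```python
import itertools
import math

n, r = 23, 23
# Count the combinations lazily; memory use stays small regardless of n and r.
count = sum(1 for _ in itertools.combinations(range(n), r))
print(count)  # 1
assert count == math.comb(n, r)
```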
Could you report this on PyTorch's GitHub repository?

Edit: add link to issue

Thank you


Thanks @Nikronic. Yes, I’ll do that.