I’ve created a queue in my forward function as shown below:
```python
que_a = [torch.rand((1, 256, 256))] * 3  # init queue of length 3
que_b = [torch.rand((1, 256, 256))] * 3  # init queue of length 3
i = 0
while i < 1e7:
    # Model does something, producing var1 and var2
    que_a.pop(0)  # pop the oldest element from each queue
    que_b.pop(0)
    que_a.append(var1)  # push var1 and var2 onto the queues
    que_b.append(var2)
    i += 1
```
Over time this causes CUDA to run out of memory. Why does this happen?
```
RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 7.79 GiB total capacity; 4.50 GiB already allocated; 82.88 MiB free; 4.54 GiB reserved in total by PyTorch)
```
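For context, I suspect the appended tensors are keeping their autograd graphs alive. Here is a minimal sketch (the names `x` and `var1` are illustrative stand-ins for the model's output, not my actual code) showing how `detach()` drops the graph reference before a tensor goes into the queue:

```python
import torch

# Stand-in for one model step that produces a graph-carrying tensor.
x = torch.rand(1, 256, 256, requires_grad=True)
var1 = x * 2  # var1.grad_fn keeps the whole backward graph (and its buffers) alive

queue = [torch.zeros(1, 256, 256)] * 3  # init queue of length 3
queue.pop(0)                 # drop the oldest element
queue.append(var1.detach())  # detach() stores only the data, not the graph

assert queue[-1].grad_fn is None  # queued tensor no longer references the graph
```

If the undetached `var1` were appended instead, each queue entry would pin an entire iteration's graph in memory until it is popped, which may explain the steady growth.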