An out-of-memory error when using a list with tensors

import torch
n = 1000
y = torch.rand(5000, 5000, 10)  # ~1 GB float32 tensor
a = []
for i in range(n):
    x = torch.rand(5000, 5000, 10)    # fresh ~1 GB tensor each iteration
    y = torch.cat([x[:5], y], dim=0)  # y grows by 5 slices per iteration
    a.append(y[0, 0, 0])              # keep a single element in the list

I tried this code and found that memory usage grows dramatically. But if I remove the code involving the list a, the problem goes away. What’s the problem?

y takes ~1GB of memory, and you are concatenating it with x[:5], which grows it a bit in each iteration (creating the temporary tensor of course uses additional memory). Afterwards you append a single tensor, so your peak memory should only be ~2GB higher (plus the slight growth from each iteration).
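
For reference, the ~1GB figure follows directly from the element count, since torch.rand returns float32 (4 bytes per element):

elements = 5000 * 5000 * 10    # 250,000,000 elements
print(elements * 4 / 1024**3)  # 4 bytes each -> ~0.93 GiB for y (and each x)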

Maybe I should clarify: the snippet I showed occupies far more memory than ~2GB. I suspect that the variable x is not released properly in each iteration. But if I remove all the code involving the list a (the code below), the memory occupied stays around ~2GB.

import torch
n = 1000
y = torch.rand(5000, 5000, 10)
for i in range(n):
    x = torch.rand(5000, 5000, 10)
    y = torch.cat([x[:5], y], dim=0)  # same concatenation, but no list

So I don’t understand what the difference is.
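
To make the comparison concrete, one option is to log the process memory per iteration for both variants. A minimal sketch, assuming psutil is installed (it is not used in the snippets above):

import psutil
import torch

proc = psutil.Process()
n = 20  # a few iterations are enough to see the trend
y = torch.rand(5000, 5000, 10)
a = []
for i in range(n):
    x = torch.rand(5000, 5000, 10)
    y = torch.cat([x[:5], y], dim=0)
    a.append(y[0, 0, 0])  # comment this line out to compare the two variants
    print(i, proc.memory_info().rss / 1024**3)  # resident set size in GiB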

Hello, here is a cleaner snippet that isolates my problem; this code makes memory usage grow rapidly.

import torch
n = 1000
a = []
for i in range(n):
    x = torch.rand(5000, 5000, 10)  # ~1 GB tensor per iteration
    a.append(x[0, 0, 0])            # keep one element from each x

But the equivalent NumPy version does not:

import numpy as np
n = 1000
a = []
for i in range(n):
    x = np.random.rand(5000, 5000, 10)  # ~2 GB float64 array per iteration
    a.append(x[0, 0, 0])                # single-element indexing yields a NumPy scalar (a copy)
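
One difference worth checking: in PyTorch, indexing a single element returns a 0-dim tensor that shares the parent tensor’s storage, while NumPy’s x[0, 0, 0] returns a plain scalar copy. If that is what is happening here, each appended element would keep the whole ~1GB buffer of its x alive. A minimal sketch of the check and a workaround under that assumption, using the standard .item() call to copy the value out as a Python float:

import torch

# Check: a single-element index is a 0-dim tensor sharing the parent's buffer.
x = torch.rand(2, 2)
print(x[0, 0].data_ptr() == x.data_ptr())  # True: same underlying storage

# Workaround sketch: copy the value out so each x can be freed.
n = 1000
a = []
for i in range(n):
    x = torch.rand(5000, 5000, 10)
    a.append(x[0, 0, 0].item())  # .item() returns a Python float, detached from x
    # a.append(x[0, 0, 0].clone()) would also break the link to x's storage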