How to avoid running out of memory when appending to a Python list

Hi all,
I want to append features to a list, but I keep running out of memory.

Here is my code:

for epoch in range(total_epochs):
    list_a = []
    for step, data in enumerate(data_loader):
        ...
        if some_condition:
            list_a.append(features.cpu())  # runs out of memory with both .cuda() and .cpu()
    data_loader_a = torchdata.DataLoader(list_a, ...)
    for step, data in enumerate(data_loader_a):
        ...  # train process

How can I solve this problem?


I guess features is still attached to the computation graph, so all the intermediate tensors (which would be needed for a backward call) are kept alive on the device.
If you don't need to backpropagate through the features stored in list_a, you can simply detach the tensors before appending them to the list:

list_a.append(features.detach().cpu())
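For context, here is a minimal, self-contained sketch of the corrected loop. The linear model, random data, and batch size are placeholders standing in for your own model and loader, and I dropped the epoch loop for brevity:

import torch
from torch import nn
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder model and data standing in for the real ones.
model = nn.Linear(8, 4).to(device)
data_loader = DataLoader(torch.randn(100, 8), batch_size=10)

list_a = []
with torch.no_grad():  # avoids building the graph at all for this pass
    for step, data in enumerate(data_loader):
        features = model(data.to(device))
        # detach() severs the tensor from the autograd graph so the
        # intermediates can be freed; cpu() moves it off the GPU.
        # (Under no_grad() the detach() is redundant, but it is the key
        # fix when gradients are being tracked.)
        list_a.append(features.detach().cpu())

# The stored features can then feed a second DataLoader for training.
data_loader_a = DataLoader(torch.cat(list_a), batch_size=10)

If you don't need gradients anywhere in this feature-extraction pass, wrapping it in torch.no_grad() as above saves even more memory, since the graph is never built; detach() alone is enough if you still need gradients elsewhere in the same forward pass.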