Training and testing dataloader not sharing memory

Hi,

My code is pretty standard: I have a training and a testing loop over the train and test sets, and I'm using a DataLoader object for both. The problem I'm having is that when the training loop is over, the train_loader doesn't release its GPU memory. Therefore, when I start iterating over the test_loader (with the same batch size), I get a memory error. Is there a way to tell a DataLoader to relinquish its GPU memory in order to avoid such an error?

Many thanks,
Lucas

bump bump bump bump bump :slight_smile:

Perhaps you are holding on to some Variables from the training loop and thus not freeing the graph? It's hard to tell without seeing your code.

It's likely unrelated to the DataLoader, though.
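To illustrate what I mean (just a rough sketch with placeholder names, not your actual code): if you keep whole loss/output tensors around after each iteration, their computation graphs, and therefore the GPU activations for the whole batch, stay alive. Keeping only the Python number lets the graph be freed each step. This is written with the current tensor API; with the old Variable API the equivalent of `.item()` is `loss.data[0]`.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins just to make the pattern concrete (not the poster's model/data).
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loader = DataLoader(TensorDataset(torch.randn(64, 10),
                                  torch.randint(0, 2, (64,))),
                    batch_size=16)

losses = []
for data, target in loader:
    output = model(data)
    loss = criterion(output, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # losses.append(loss)       # keeps the whole graph (and its buffers) alive
    losses.append(loss.item())  # keeps only the number, so the graph can be freed
```

Note that the DataLoader itself only hands out CPU tensors; the GPU memory comes from whatever you move onto the device and keep referenced.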

Hi,

thanks for your answer. The reason I think it's related to the DataLoader is that the memory used is directly correlated with the batch size. I tried doing del loss, del output, but didn't see any changes. Here is my code, if you wouldn't mind taking a look :slight_smile: https://github.com/pclucas14/pixel-cnn-pp/blob/master/main.py
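For what it's worth, the allocator counters can show whether live tensors (rather than the DataLoader itself) account for the batch-size-dependent memory; a small sketch, assuming a recent PyTorch where these functions exist:

```python
import torch

if torch.cuda.is_available():
    alloc = torch.cuda.memory_allocated() / 1024**2    # memory held by live tensors
    reserved = torch.cuda.memory_reserved() / 1024**2  # memory cached by PyTorch's allocator
    print(f"allocated: {alloc:.1f} MiB, reserved: {reserved:.1f} MiB")
```

If `allocated` stays high after training, something is still referencing training tensors; if only `reserved` is high, the memory is just cached and will be reused for the test batches.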

Thanks!

You were right, it had nothing to do with the DataLoader. I had two outputs briefly overlapping in memory.
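For anyone who hits the same thing, the shape of the problem and the fix looks roughly like this (a sketch with placeholder names, not the actual code from the repo): the last training output/loss were still referenced when the first test forward pass ran, so both batches' activations sat on the GPU at once. Dropping them first, and evaluating without building a graph, avoids the overlap.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(10, 2).to(device)   # stand-in for the real network
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

def toy_loader():
    return DataLoader(TensorDataset(torch.randn(64, 10),
                                    torch.randint(0, 2, (64,))),
                      batch_size=16)

train_loader, test_loader = toy_loader(), toy_loader()

for data, target in train_loader:
    output = model(data.to(device))
    loss = criterion(output, target.to(device))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Drop the last training output/loss before evaluating, so the training batch's
# activations and the first test batch's activations never coexist on the GPU.
del output, loss
torch.cuda.empty_cache()   # optional: return cached blocks to the driver

model.eval()
with torch.no_grad():      # no graph is built for the evaluation passes
    for data, target in test_loader:
        output = model(data.to(device))
```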

Thanks!