My code is pretty standard: a training loop over the train set and a testing loop over the test set, each iterating over its own DataLoader. The problem I'm having is that when the training loop is over, the train_loader doesn't release its GPU memory. Therefore, when I start iterating over the test_loader (with the same batch size) I get an out-of-memory error. Is there a way to tell a DataLoader to relinquish its GPU memory in order to avoid such an error?
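For context, the overall shape of my loops is roughly this. It's a minimal runnable sketch with a toy model and random tensors standing in for my actual network and data (the real code is in the repo linked in my follow-up below):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for my actual model and datasets.
model = nn.Linear(128, 10).cuda()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())

train_set = TensorDataset(torch.randn(1024, 128), torch.randint(0, 10, (1024,)))
test_set = TensorDataset(torch.randn(256, 128), torch.randint(0, 10, (256,)))
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
test_loader = DataLoader(test_set, batch_size=64)

# Training loop.
model.train()
for x, y in train_loader:
    x, y = x.cuda(), y.cuda()
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

# Testing loop, same batch size -- this is where the CUDA
# out-of-memory error shows up for me.
model.eval()
for x, y in test_loader:
    x, y = x.cuda(), y.cuda()
    output = model(x)
    loss = criterion(output, y)
```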
Thanks for your answer. The reason I think it's related to the DataLoader is that the memory used is directly correlated with the batch size. I tried doing del loss and del output, but didn't see any change. Here is my code, if you wouldn't mind taking a look: https://github.com/pclucas14/pixel-cnn-pp/blob/master/main.py
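Concretely, the cleanup I tried looks like this. It's a toy sketch reproducing the pattern rather than my actual code, and the `torch.cuda.empty_cache()` call at the end is something I'm aware of and include for illustration, not something I've confirmed helps here:

```python
import torch

# Toy tensors standing in for one batch's forward pass.
x = torch.randn(64, 128, device="cuda")
w = torch.randn(128, 10, device="cuda", requires_grad=True)
output = x @ w
loss = output.pow(2).mean()

# What I tried at the end of each iteration: drop the Python references
# so the tensors (and the autograd graph hanging off them) can be freed.
del output
del loss

# Even after `del`, PyTorch's caching allocator keeps freed blocks
# reserved for reuse; this releases the cached memory back to the driver.
torch.cuda.empty_cache()
```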