Memory Leak - DataLoader

I've got a memory-leak problem during training. I suspect the main cause is the dataset created with torchvision.datasets.ImageFolder (when I use torchvision.datasets.CIFAR10 instead of my own dataset, the problem does not occur). I've tried to find a solution in similar topics.
Here is my dataset

and dataloader:

Do you have any idea what could be the reason for the memory leak?

Thanks in advance


Could you give more details on how you measure the memory?
Also, do you have a small code sample I could run locally to reproduce this?

To be honest, I checked it in the task manager. On Linux I get a runtime error during training (I suspected it was related to the small RAM capacity of my GPU, but when I use the CPU, my RAM fills up during training).
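(The task manager only shows the total usage of the process; a more precise way to track growth across epochs, as a minimal standard-library sketch, is to log the peak resident set size yourself. The 50 MiB allocation below is just a hypothetical stand-in for a training step.)

```python
import resource
import sys

def peak_rss_mb() -> float:
    """Peak resident set size of this process, in MiB."""
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is reported in kilobytes on Linux, bytes on macOS
    if sys.platform == "darwin":
        rss /= 1024
    return rss / 1024

before = peak_rss_mb()
data = [bytearray(1024 * 1024) for _ in range(50)]  # stand-in: allocate ~50 MiB
after = peak_rss_mb()
print(f"peak RSS grew by ~{after - before:.0f} MiB")
```

Logging this once per epoch distinguishes a true leak (the number keeps climbing every epoch) from steady-state high usage (it plateaus after the first epoch).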

Code:


In your screenshot, it seems like there is no leak, just high usage.
If you use many workers, or if your dataset has larger images or a larger batch size, then the memory needed to load the data will be bigger.
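(As a rough illustration of how those knobs multiply: each DataLoader worker keeps up to `prefetch_factor` batches in flight, so host RAM for loading scales roughly like the sketch below. The sizes are hypothetical, and float32 tensors are assumed.)

```python
def loader_ram_estimate_mb(batch_size: int, channels: int, height: int,
                           width: int, num_workers: int,
                           prefetch_factor: int = 2,
                           bytes_per_elem: int = 4) -> float:
    """Rough upper bound on RAM held by prefetched batches, in MiB."""
    batch_bytes = batch_size * channels * height * width * bytes_per_elem
    # each worker keeps up to prefetch_factor batches in flight
    in_flight = max(1, num_workers) * prefetch_factor
    return batch_bytes * in_flight / 2**20

# CIFAR10-sized images vs. larger ImageFolder images, same loader settings:
print(loader_ram_estimate_mb(64, 3, 32, 32, num_workers=4))    # small images
print(loader_ram_estimate_mb(64, 3, 224, 224, num_workers=4))  # ~49x more RAM
```

This is why swapping CIFAR10 (32x32 images) for an ImageFolder of full-size photos can make the same loader settings use far more RAM without any actual leak.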

I see now, that was a stupid question on my part :slight_smile:
Thanks for the help
