How to manage RAM usage when loading data with a DataLoader in deep learning?

Hello everyone.

I am working on video processing, and as data samples I need short video clips (around 10 frames each).
I have a dataset composed of consecutive images, and what I want to do is combine 10 consecutive frames into a cuboid. The problem is that for big datasets, or even for several normal-sized datasets, when I create numpy arrays containing many of these cuboids (10 * 227 * 227: 10 frames of size 227x227), the computer's RAM gets fully occupied and the kernel dies!

Maybe my algorithm has a bad mistake and I am unaware of a simple way, but I would be happy if anyone could give me useful hints to manage my RAM.

Thank you very much

The storage of 10 frames of shape 227x227 shouldn't cause any problems by itself.
Assuming you are dealing with RGB images, you would only use approx. 10*227*227*3*4 / 1024**2 ≈ 5.9 MB if you store the images in FP32.
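
If you want to double-check that estimate in code, a quick sanity check (assuming a [frames, channels, height, width] float32 tensor) would be:

```python
import torch

# One cuboid: 10 RGB frames of 227x227 pixels, stored as float32
cuboid = torch.zeros(10, 3, 227, 227, dtype=torch.float32)

# Footprint in MB = number of elements * bytes per element
print(cuboid.nelement() * cuboid.element_size() / 1024**2)  # ~5.9 MB
```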

Do you run out of memory directly after loading the data or could some other code part (e.g. model forward/backward) cause this issue?


Thank you very much @ptrblck for your reply.
First, there are plenty of these cuboids (4 datasets such as UCSD, Avenue, …). Considering the possible augmentations, the number gets even higher.
Yes, in fact I read and process the datasets first to build the cuboids, and that makes me run out of memory even before training starts (the whole 32 GB of RAM)!
Sometimes I think about loading the data during training (not pre-loading the whole dataset at once), but I think that would make training slow (because the GPU would have to wait for new data samples from the CPU, if I am not wrong).
What is the standard approach for large datasets?

Hi,

You should not load the whole dataset into your RAM if you do not have enough RAM for that. You should use lazy loading via a custom Dataset and PyTorch's DataLoader instead, so each sample is read from disk only when it is needed. As you mentioned, the next batch has to be prepared while your GPU is processing the current one, and the DataLoader's worker processes handle exactly that.

By the way, this is the most efficient solution for tasks like this that involve huge datasets.
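
A minimal sketch of such a lazily loading Dataset could look like this (the directory layout, file naming, and non-overlapping clips of 10 frames are just assumptions for illustration, so adapt them to your data):

```python
import os
from PIL import Image
import torch
from torch.utils.data import Dataset, DataLoader
import torchvision.transforms as T

class VideoCuboidDataset(Dataset):
    """Loads 10 consecutive frames from disk only when a sample is requested."""
    def __init__(self, frame_dir, frames_per_clip=10):
        # Assumes frame_dir contains the frames as image files in temporal order
        self.frame_paths = sorted(
            os.path.join(frame_dir, f) for f in os.listdir(frame_dir)
        )
        self.frames_per_clip = frames_per_clip
        self.transform = T.Compose([T.Resize((227, 227)), T.ToTensor()])

    def __len__(self):
        # Number of non-overlapping clips; change this for overlapping clips
        return len(self.frame_paths) // self.frames_per_clip

    def __getitem__(self, idx):
        start = idx * self.frames_per_clip
        frames = [
            self.transform(Image.open(p).convert("RGB"))
            for p in self.frame_paths[start:start + self.frames_per_clip]
        ]
        return torch.stack(frames)  # shape: [10, 3, 227, 227]

# Worker processes prepare upcoming batches while the GPU trains
loader = DataLoader(VideoCuboidDataset("path/to/frames"), batch_size=8,
                    shuffle=True, num_workers=4, pin_memory=True)
```

With num_workers > 0, the workers load and preprocess the next batches in the background, so the GPU usually does not have to wait for data.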


Thank you very much @Nikronic.
Thanks for the great information.