CUDA out of memory is a frequent error

I’ve been training VGG16 from scratch on the flower recognition dataset, and CUDA runs out of memory right after the first epoch. Is there a way around this? I recently studied TensorFlow.js, which has a tf.tidy() call that disposes of tensors as soon as they are no longer needed. Does PyTorch have a similar provision?
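In PyTorch, GPU tensors are freed automatically once the last Python reference to them goes away, so there is no direct tf.tidy() counterpart; out-of-memory errors that appear after the first epoch usually come from the training loop itself keeping references alive, most often by accumulating the loss tensor (which retains the whole computation graph) or by evaluating without torch.no_grad(). A minimal sketch of a loop that avoids both issues, assuming 5 output classes (as in the Kaggle flowers dataset) and a placeholder `loader` name:

```python
import torch
import torch.nn as nn
from torchvision import models

device = torch.device("cuda")
# Assumed setup: 5 classes, plain SGD; adapt to your own code.
model = models.vgg16(num_classes=5).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def train_one_epoch(loader):
    model.train()
    running_loss = 0.0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        # .item() extracts a Python float; accumulating `loss` itself
        # would keep every iteration's computation graph in GPU memory.
        running_loss += loss.item()
    return running_loss / len(loader)

@torch.no_grad()  # no autograd graph is built, so no activations are kept
def evaluate(loader):
    model.eval()
    correct = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        correct += (model(images).argmax(dim=1) == labels).sum().item()
    return correct / len(loader.dataset)
```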

What batch size are you using? If the batch size is too large, it will cause an out-of-memory error.
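If shrinking the batch size hurts convergence, a common workaround is gradient accumulation: call backward() over several small batches before each optimizer step, so the effective batch size stays large while peak memory stays low. A rough sketch, reusing the names from the loop above (`accum_steps = 4` is just an illustrative value):

```python
accum_steps = 4  # effective batch size = per-step batch size * accum_steps

optimizer.zero_grad()
for step, (images, labels) in enumerate(loader):
    images, labels = images.to(device), labels.to(device)
    # Scale the loss so the summed gradients match one large batch.
    loss = criterion(model(images), labels) / accum_steps
    loss.backward()  # gradients accumulate in .grad across iterations
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```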

Can you post your training code?