How to determine how much memory is needed to train a model?

Hi, I am working on an image classification problem using PyTorch, and I was wondering how I can find out how much memory is required to train the model. Some might suggest `summary`; I tried that, but I doubt the result, since my ViT (a basic one, not a large or a tiny) reports fewer MB than a ResNet-56.
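As a sanity check on the summary output, one thing you can do is count the bytes of the parameters and buffers directly. A minimal sketch, using torchvision's `vit_b_16` and `resnet18` as stand-ins (a CIFAR-style ResNet-56 is not in torchvision, so swap in your own models):

```python
import torch
from torchvision.models import resnet18, vit_b_16  # stand-ins; swap in your own models

def param_memory_mb(model: torch.nn.Module) -> float:
    """Bytes occupied by all parameters and buffers of a model, in MB."""
    param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
    buffer_bytes = sum(b.numel() * b.element_size() for b in model.buffers())
    return (param_bytes + buffer_bytes) / 1024**2

print(f"ResNet-18: {param_memory_mb(resnet18()):.1f} MB")
print(f"ViT-B/16 : {param_memory_mb(vit_b_16()):.1f} MB")
```

Note this only covers weights; gradients, optimizer state, and activations come on top during training.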

Regards

As long as you can fit a batch size of 1 on the GPU, you can use gradient accumulation.

See here.
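In case it helps, here is a minimal, self-contained sketch of gradient accumulation with a toy linear model and random data (the model, optimizer, and sizes are placeholders): the effective batch size is `micro_batch * accum_steps`, but only one micro-batch of activations is held in memory at a time.

```python
import torch
from torch import nn

# Gradient-accumulation sketch with a toy model and random data.
model = nn.Linear(32, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
accum_steps = 8
micro_batch = 4

optimizer.zero_grad()
for step in range(32):
    x = torch.randn(micro_batch, 32)
    y = torch.randint(0, 10, (micro_batch,))
    loss = criterion(model(x), y) / accum_steps  # scale so the accumulated grads average out
    loss.backward()                              # grads add up in .grad across micro-batches
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```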

As for calculating the GPU memory requirement, it's a bit complicated for models using convolutions, because it also depends on your image size, the number and size of layers, the dtype, the kernel sizes, the optimizer, model.train() vs. model.eval(), and the batch size.
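One practical way to account for all of those factors at once is to run a single training step at your real image size, batch size, dtype, and optimizer and read back the peak allocation. A sketch, assuming a CUDA device and torchvision's `resnet18` as a stand-in for your model:

```python
import torch
from torchvision.models import resnet18  # stand-in; use your own model

device = torch.device("cuda")
model = resnet18().to(device).train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

batch_size, image_size = 16, 224
x = torch.randn(batch_size, 3, image_size, image_size, device=device)
y = torch.randint(0, 1000, (batch_size,), device=device)

torch.cuda.reset_peak_memory_stats(device)
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()  # optimizer state (e.g. AdamW moments) gets allocated here
torch.cuda.synchronize(device)

peak_mb = torch.cuda.max_memory_allocated(device) / 1024**2
print(f"Peak GPU memory during one training step: {peak_mb:.0f} MB")
```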

The chart on this page gives the parameter sizes of various pretrained vision models for PyTorch. It can generally be used as a relative comparison, but you'll still need some trial and error, depending on how you set the batch size, image size, etc.
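For the trial-and-error part, one option is to keep doubling the batch size until a step runs out of memory. A rough sketch, again with `resnet18` as a stand-in and assuming a recent PyTorch that exposes `torch.cuda.OutOfMemoryError`:

```python
import torch
from torchvision.models import resnet18  # stand-in for your model

def one_step(model, batch_size, image_size=224):
    """Run a single forward/backward pass at the given batch size."""
    x = torch.randn(batch_size, 3, image_size, image_size, device="cuda")
    y = torch.randint(0, 1000, (batch_size,), device="cuda")
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    model.zero_grad(set_to_none=True)

model = resnet18().cuda().train()
batch_size, largest_ok = 1, 0
while batch_size <= 1024:
    try:
        one_step(model, batch_size)
        largest_ok = batch_size
        batch_size *= 2
    except torch.cuda.OutOfMemoryError:
        break
    finally:
        torch.cuda.empty_cache()
print(f"Largest batch size that fit: {largest_ok}")
```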


Hi, thanks for the reply. I probably misspoke: I want to measure how much memory my model requires so I can tell whether it can be trained on a smartphone, a Raspberry Pi, or a computer, depending on the memory it needs and which device's hardware can handle it. Thanks in advance.
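As a rough back-of-the-envelope for that kind of decision, the static part of the training footprint is parameters + gradients + optimizer state (Adam-style optimizers keep about two extra tensors per parameter); activations come on top and scale with image size and batch size, so treat this as a lower bound. A sketch with a toy stand-in model:

```python
import torch
from torch import nn

def static_training_mb(model: nn.Module, optimizer_states_per_param: int = 2) -> float:
    """Parameters + gradients + optimizer state, in MB (activations not included)."""
    param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
    grad_bytes = param_bytes                              # one gradient tensor per parameter
    opt_bytes = optimizer_states_per_param * param_bytes  # e.g. Adam keeps exp_avg and exp_avg_sq
    return (param_bytes + grad_bytes + opt_bytes) / 1024**2

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy stand-in
print(f"Static training footprint: {static_training_mb(model):.2f} MB")
```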