Strange out-of-memory error when running image detection tutorial

Hello,

PT newbie here. I’m in the process of learning PT for computer vision; I’m primarily interested in image detection use cases. I use a Quadro P600 GPU with Ubuntu 18.

I have an OOM problem with the following tutorial: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html

I just copied the code into a Jupyter notebook and ran it.

It quickly throws an OOM exception during the forward pass, at the line
loss_dict = model(images, targets)
in the train_one_epoch function (which I took from https://github.com/pytorch/vision/tree/master/references/detection).
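
For context, my notebook is a straight copy of the tutorial, so the setup is roughly the following (paraphrased from memory, details may differ slightly from the tutorial's exact code):

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Detection model pre-trained on COCO, with the box predictor replaced
# for the tutorial's two classes (background + pedestrian).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.to(device)
model.train()

# Inside train_one_epoch (references/detection/engine.py), each batch goes
# through a forward pass in training mode, which returns a dict of losses.
# This is the call that raises the OOM on my machine:
# loss_dict = model(images, targets)
```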

If I force execution onto the CPU, I also get an OOM, with a bit more detail: it says it tries to allocate about 22 GB of memory.
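
(To force the CPU run I just hard-coded the device instead of the tutorial's cuda-if-available check, along these lines:)

```python
import torch

# Tutorial default: pick the GPU when one is available.
# device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

# Forced CPU run: the model and the images/targets all stay on the CPU,
# and the CPU allocator reports the requested allocation size
# (~22 GB in my case) directly in the RuntimeError message.
device = torch.device('cpu')
```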

This is a huge amount of memory. Is this really what is expected? Am I doing something wrong?

Many thanks in advance for any help you could provide.

If I’m not mistaken, your GPU should only have 2 GB of memory?
If so, it might not be enough for this tutorial.
Could you try to run it on Colab and check the memory usage on their GPUs?
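
(E.g. after a training iteration, recent PyTorch versions let you print the peak usage with something like this:)

```python
import torch

# Peak memory used by tensors on the current GPU since the last reset,
# reported in GB. Call torch.cuda.reset_peak_memory_stats() beforehand
# to measure a single iteration in isolation.
print(f"max allocated: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
print(f"max reserved:  {torch.cuda.max_memory_reserved() / 1024**3:.2f} GB")
```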

Just tried it. It works, and Colab says that 2.2 GB of RAM is used. I think you’re right, I need a GPU with more RAM… or maybe just use Colab!

Many thanks for your help.