PyTorch newbie here. I'm learning PyTorch for computer vision, primarily for image detection use cases. I'm on a Quadro P600 GPU with Ubuntu 18.
I'm hitting an OOM error with the following tutorial: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html
I just copied the code into a Jupyter notebook and ran it.
It quickly throws an OOM exception at the line

loss_dict = model(images, targets)

inside the train_one_epoch function (which I took from https://github.com/pytorch/vision/tree/master/references/detection).
If I force execution onto the CPU, I also get an OOM, with a bit more detail: it says it tried to allocate about 22 GB of memory.
That's a huge amount of memory. Is this really expected? Am I doing something wrong?
Many thanks in advance for any help you can provide.