Please find attached a very simple piece of code that just stores VGG outputs:
It requires over 5 GB of RAM on my laptop, and that's after the model has been loaded (i.e., during the loop).
Any ideas?
You are keeping track of the full history of the outputs by saving the output variable.
You might want to do this instead:
l = []
for i in range(10):
    out = vgg19(Variable(torch.randn(1, 3, 224, 224)))
    l.append(out.data)
Thank you very much, that solved the problem indeed!
I need to take a deeper look at PyTorch's architecture :))
What happens is that PyTorch allocates the intermediate buffers on demand and frees them as soon as they go out of scope.
Variables keep the history of all the computations that were performed to produce them.
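A minimal sketch of this point, using the modern PyTorch API (where Variable has since been merged into Tensor): an output computed from inputs that require gradients carries its history via .grad_fn, while a detached copy (the modern analogue of .data) does not, so its graph can be freed.

```python
import torch

# An output computed from a grad-requiring input remembers its history.
x = torch.randn(1, 3, requires_grad=True)
out = (x * 2).sum()

print(out.grad_fn is not None)  # True: out carries its computation graph
print(out.detach().grad_fn)     # None: the detached copy has no history
```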
A common mistake is to do
current_loss += loss
which will not free the memory, because you keep the whole history of computations alive. You should do instead
current_loss += loss.data
for example.
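A hypothetical tiny loop illustrating this pitfall, written against the modern PyTorch API (in the Variable-era API the fix was loss.data[0]; today the idiomatic equivalent is loss.item()):

```python
import torch

model = torch.nn.Linear(4, 1)
current_loss = 0.0
for _ in range(3):
    x = torch.randn(8, 4)
    loss = model(x).pow(2).mean()
    # current_loss += loss        # BAD: keeps every iteration's graph alive
    current_loss += loss.item()   # GOOD: a plain float, no graph retained

print(type(current_loss))  # <class 'float'>
```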
Also, if you only want to perform forward-pass computations, using volatile variables will save you a ton of memory.
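For reference, volatile=True was the pre-0.4 API; in current PyTorch versions the equivalent is the torch.no_grad() context, which skips graph construction entirely during forward-only inference. A minimal sketch:

```python
import torch

model = torch.nn.Linear(10, 2)

# Inside no_grad(), no autograd history is recorded, so no
# intermediate buffers are kept for a backward pass.
with torch.no_grad():
    out = model(torch.randn(1, 10))

print(out.requires_grad)  # False: no history stored for this output
```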
Thank you very much for your explanation!