Out of GPU memory during test, model output (eval) & requires_grad = True

Hi everyone!

I have several questions for you:

I’m new to PyTorch and I’m trying to run a test pass on my NN model in JupyterLab, and something strange is happening: it seems to require the same amount of GPU memory as training (for the same input size, with a training batch size of 1). I think this could be due to the intermediate buffers that are saved for backpropagation (I can see the GPU memory explode right after I run the output = model(input) line). Do you know a way to check whether these gradient buffers are actually being computed and saved?

If I do

my_model.eval()

to avoid backpropagation, do I also need to do something like

with torch.no_grad():
    model_input = some_tensor

or

with torch.no_grad():
    model_output = my_model(model_input)

?

And if I do something like this:

my_model.eval()
with torch.no_grad():
    model_output = my_model(model_input)

is model_output.requires_grad supposed to be False?
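For what it’s worth, here is a minimal sketch of that pattern (the nn.Linear layer is just a hypothetical stand-in for the actual model, so it runs on CPU too):

```python
import torch
import torch.nn as nn

# Toy model standing in for the real network.
model = nn.Linear(10, 2)
model.eval()  # switches layers like dropout/batchnorm to eval behaviour

x = torch.randn(1, 10)

# Note the parentheses: no_grad is used as a context manager, no_grad().
with torch.no_grad():
    out = model(x)

print(out.requires_grad)  # False: no autograd graph is built inside no_grad()
```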

I’m working with PyTorch 0.4.0 on Windows 10 and I have a 6 GB GPU.

Thank you!

Hi,

You can find some more information on 'model.eval()' vs 'with torch.no_grad()'.
Let me know if you have more questions.
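To make the difference concrete, here is a small sketch (again with a hypothetical nn.Linear as the model): eval() alone only changes layer behaviour, while no_grad() is what actually stops autograd from recording the graph and keeping buffers around.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
model.eval()

# eval() alone: autograd still records the graph, so intermediate
# buffers are kept and the output still requires grad.
out_eval = model(torch.randn(1, 10))
print(out_eval.requires_grad)  # True

# no_grad() disables graph recording, which is what saves the memory.
with torch.no_grad():
    out_nograd = model(torch.randn(1, 10))
print(out_nograd.requires_grad)  # False
```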


Hi,

Thank you, it helped and I finally figured out my problem!