I have several questions for you:
I’m new to PyTorch, and I’m running inference on my NN model in JupyterLab. Something strange is happening: it seems to require the same amount of GPU memory as training (for the same input size, with a training batch size of 1). I think this could be due to the intermediate activations (“gradient maps”) that are saved for backpropagation — I can see the GPU memory usage explode right after I run the output = model(input) line. Do you know a way to check whether the autograd graph is actually being built and saved?
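To make the question concrete, here is a minimal sketch of the check I have in mind, using a toy nn.Linear as a stand-in for my model (my real model isn’t shown here). If out.grad_fn is set, autograd recorded the forward pass and is keeping the intermediate activations; on a CUDA tensor you could also compare torch.cuda.memory_allocated() before and after.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for my actual model
model = nn.Linear(10, 10)
x = torch.randn(1, 10)

out = model(x)
# grad_fn is set: autograd built a graph, so activations are being saved
print(out.grad_fn is not None)

with torch.no_grad():
    out_ng = model(x)
# grad_fn is None: no graph was recorded, activations can be freed
print(out_ng.grad_fn is None)
```

Is checking grad_fn like this a reliable way to tell?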
If I do
my_model.eval()
to avoid the backpropagation, do I also need to do something like
with torch.no_grad():
    model_input = some_tensor
with torch.no_grad():
    model_output = my_model(model_input)
And if I do something like this:
my_model.eval()
with torch.no_grad():
    model_output = my_model(model_input)
is model_output.requires_grad supposed to be False?
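Here is how I would test that combination, with a hypothetical small model containing a Dropout layer (since my understanding is that eval() only changes layer behaviour, while no_grad() is what disables autograd):

```python
import torch
import torch.nn as nn

# Hypothetical model with a layer whose behaviour depends on train/eval mode
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(0.5))

model.eval()               # switches dropout/batchnorm to inference behaviour
with torch.no_grad():      # this is what actually disables graph building
    out = model(torch.randn(1, 4))

print(out.requires_grad)   # I expect False here
print(out.grad_fn)         # and None here
```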
I’m working with PyTorch 0.4.0 on Windows 10, and my GPU has 6 GB of memory.