Why does eval() mode still require .detach()?

I have a model and I switch it to eval() mode, which I thought should set requires_grad to False for all parameters. Why do I still get "Can't call numpy() on Variable that requires grad" when I convert the torch output to numpy?


.eval() does not set requires_grad to False!
What eval() does is run your layers in evaluation mode: in particular, dropout will no longer drop activations, and batchnorm will use its saved running statistics instead of statistics computed on the fly.
If you want to disable autograd, you should wrap your forward pass in a with torch.no_grad(): block (or call .detach() on the output before converting it).
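A minimal sketch of the difference, using a plain Linear layer as a stand-in for your model:

```python
import torch

model = torch.nn.Linear(3, 1)
model.eval()  # changes layer behavior (dropout/batchnorm), NOT requires_grad

x = torch.randn(2, 3)

# In eval() mode the output still tracks gradients:
y = model(x)
print(y.requires_grad)  # True -> y.numpy() would raise the error above

# Option 1: disable autograd for the whole forward pass
with torch.no_grad():
    y = model(x)
print(y.requires_grad)  # False -> y.numpy() works

# Option 2: detach just the output from the graph
arr = model(x).detach().numpy()
```

Either option works; no_grad() also saves memory during inference because no graph is built.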


Thanks for the clarification!