Model evaluation costs as much memory as training

Hi,
Has anyone noticed that the model costs as much memory in validation (eval() mode) as in training? Shouldn't it be much smaller, since backpropagation is not needed?

Best regards,
YW

Are you using volatile=True?

I couldn’t do that, since the same variable is used in both training and validation. If I use volatile=True when I create the variable, gradients will not be backpropagated during training.

eval mode has (almost) no impact on memory. It only changes how certain layers behave, e.g. batch norm and dropout.
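A quick sketch of what eval() actually changes, using dropout as an example (written against a current PyTorch build; the tensor shapes are just illustrative). In eval mode, nn.Dropout becomes the identity, but nothing about gradient tracking or memory use changes:

```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(8)

drop.train()            # training mode: elements are randomly zeroed and rescaled
train_out = drop(x)

drop.eval()             # eval mode: dropout is a no-op, output equals input
eval_out = drop(x)
assert torch.equal(eval_out, x)

# eval() did not disable autograd: requires_grad behaves exactly as before
assert eval_out.requires_grad == x.requires_grad
```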

I don’t quite understand your reasoning for not using volatile. You only need to set your input as volatile; this feature was built almost specifically for cases like yours.
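For reference, volatile was an attribute of the old (pre-0.4) Variable API and was removed; in current PyTorch the equivalent is the torch.no_grad() context. A minimal sketch of the pattern being suggested, with a hypothetical nn.Linear model standing in for the real one: track gradients in training as usual, and wrap only the validation forward pass so no graph (and no saved activations) is kept:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)   # placeholder model for illustration
x = torch.randn(4, 10)

# Training: autograd builds the graph and gradients flow normally.
model.train()
loss = model(x).sum()
loss.backward()
assert model.weight.grad is not None

# Validation: no_grad() skips graph construction entirely,
# so intermediate activations are not retained for backward.
model.eval()
with torch.no_grad():
    val_out = model(x)
assert not val_out.requires_grad
```

The point of the reply above is that the memory saving comes from disabling graph construction on the input side, not from eval() itself.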