Using eval() when I want to use the net as a loss

Hi All,
I want to use a net as part of a loss function, but the problem is that the net outputs different values in train() and eval() modes, and its outputs are much better in eval().
However, when the net is in eval(), I get the following error when I try to use it as part of the loss:
RuntimeError: cudnn RNN backward can only be called in training mode
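
For context, a minimal sketch that reproduces this for me (assuming a CUDA build with cuDNN; the plain nn.LSTM here just stands in for my actual net):

    import torch
    import torch.nn as nn

    rnn = nn.LSTM(input_size=8, hidden_size=16).cuda()
    rnn.eval()  # eval mode: cuDNN runs the forward pass in inference mode

    x = torch.randn(5, 3, 8, device="cuda", requires_grad=True)
    out, _ = rnn(x)
    out.mean().backward()  # RuntimeError: cudnn RNN backward can only be called in training mode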

So I want to get the same output from the net as in eval() while also avoiding the error above.
Any help will be appreciated.

I found a workaround: keep net.train(), and then, for every Dropout and BatchNorm layer, set:

    net.dropout.p = 0               # drop nothing, so the output matches eval()
    net.batchNorm.training = False  # use running statistics, as in eval()

It seems to work.
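
In case it helps anyone, here is the same idea as a generic sketch that walks all submodules (the helper name prepare_loss_net is just illustrative, and I assume standard nn.Dropout / nn.BatchNormNd layers):

    import torch.nn as nn

    def prepare_loss_net(net):
        # Keep the whole net in train mode so the cuDNN RNN backward pass is allowed,
        # but neutralise the layers whose behaviour differs between train() and eval().
        net.train()
        for m in net.modules():
            if isinstance(m, nn.Dropout):
                m.p = 0.0  # drop nothing -> same output as in eval()
            elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
                m.eval()   # use running statistics instead of batch statistics
        return net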
