How to calculate validation loss for faster RCNN?

The loss can be obtained while the model is in model.train() mode, but that is not the right way to compute a validation loss, since layers such as batch normalization and dropout behave differently in training mode than in evaluation mode. In evaluation mode (model.eval()), however, the model does not return the losses at all.

It is almost the same question as Compute validation loss for Faster RCNN. Please help.

@ptrblck pls help me to overcome this issue.

Would it work, if you call .eval() only on all dropout and batchnorm layers, while the parent module is kept in the training state?

@ptrblck If I call model.eval(), I will only be able to predict the results, but will be unable to get the validation loss.
I can get it as:

with torch.no_grad():
    for image, target in val_loader:
        …

But I can’t put the model into evaluation state. Is there any way I could calculate the validation loss? (Because if I do it in model.train(), batch normalization and dropout will be active.)
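The loop above can be sketched as follows. This is a minimal, hypothetical example: DummyDetector is a stand-in for torchvision's Faster R-CNN (which, in training mode, returns a dict of losses when given images and targets), and val_loader here is just a hand-built list of (image, target) pairs:

```python
import torch
import torch.nn as nn

class DummyDetector(nn.Module):
    """Stand-in for torchvision's fasterrcnn_resnet50_fpn: in training
    mode, detection models return a dict of losses given images+targets."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 1)

    def forward(self, images, targets):
        # Reduce each target's boxes to a single feature vector per sample.
        boxes = torch.stack([t["boxes"].float().mean(0) for t in targets])
        pred = self.linear(boxes)
        return {"loss_classifier": pred.abs().mean(),
                "loss_box_reg": pred.pow(2).mean()}

model = DummyDetector()
model.train()  # the loss dict is only returned in training mode

# Hypothetical validation data: (image, target) pairs.
val_loader = [(torch.rand(3, 32, 32),
               {"boxes": torch.tensor([[0.0, 0.0, 10.0, 10.0]])})
              for _ in range(4)]

val_loss = 0.0
with torch.no_grad():  # no gradients needed during validation
    for image, target in val_loader:
        loss_dict = model([image], [target])
        val_loss += sum(loss_dict.values()).item()

val_loss /= len(val_loader)
print(f"validation loss: {val_loss:.4f}")
```

With a real torchvision detection model the loop body is the same shape: call the model with lists of images and targets while it is in training mode, then sum the entries of the returned loss dict.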

@ptrblck I used a torchvision model (Faster R-CNN). So do I need to edit the source code for that?

I’m not suggesting to call model.eval(), but .eval() only on dropout and batchnorm layers.
Why wouldn’t you be able to calculate the validation loss using this approach?
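A minimal sketch of that suggestion, using a small toy model (the Sequential below is a hypothetical stand-in for a detection backbone): keep the parent module in training mode so the losses are still returned, but switch only the dropout and batchnorm submodules to eval():

```python
import torch.nn as nn

# Toy model standing in for a detection backbone (hypothetical example).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.BatchNorm2d(8),
    nn.ReLU(),
    nn.Dropout(0.5),
)

model.train()  # parent stays in training mode, so the loss dict is returned

# Switch only the layers whose behavior differs between train and eval:
# dropout stops zeroing activations, batchnorm uses its running statistics.
for module in model.modules():
    if isinstance(module, (nn.Dropout, nn.BatchNorm2d)):
        module.eval()

print(model.training)      # parent still in training mode
print(model[1].training)   # batchnorm now in eval mode
print(model[3].training)   # dropout now in eval mode
```

For a torchvision Faster R-CNN you would iterate model.modules() the same way, so no source-code edits are needed.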


@ptrblck Thanks, that sounds great. Let me try it that way.