Compute validation loss for Faster RCNN

Hi, I’m doing object detection on a custom dataset using transfer learning from a pretrained Faster RCNN model.
I would like to compute validation loss at the end of each epoch. How can this be done?

If I run the code below (model in training mode) I get losses, but dropout isn’t deactivated, so I am wondering how ‘valid’ these loss values are. And running the model in eval mode only returns the predictions.

import torch

# Train mode so the forward pass returns the loss dict instead of predictions
model.train()
for images, targets in data_loader_val:
    images = [image.to(device) for image in images]
    targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

    # No gradients needed; we only want the loss values
    with torch.no_grad():
        val_loss_dict = model(images, targets)
        print(val_loss_dict)

I’m wondering the same thing. Did you find a solution? I was thinking of forcing training mode on only some submodules (the ones that output losses).
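A sketch of that idea, inverted: as far as I can tell, in torchvision’s implementation the loss/prediction switch checks the top-level module’s training flag, so the model as a whole has to stay in train mode to get the loss dict back. What you can toggle instead are the individual layers whose train-time behaviour you don’t want (the dropout-only selection below is just an example):

import torch

model.train()  # whole model in train mode so forward returns the loss dict
for module in model.modules():
    # selectively put unwanted train-time behaviour back into eval mode;
    # here only dropout layers, as an example
    if isinstance(module, torch.nn.Dropout):
        module.eval()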

I thought it through and came to the conclusion that validation loss is only meaningful relative to training loss. Training loss is computed with dropout active too, so the two are comparable.

I guess for dropout I might be OK, but in general wouldn’t that mess up modules like batch norm that keep running estimates?
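The same selective toggle works for batch norm: calling .eval() on the norm layers makes them use their stored statistics instead of updating the running estimates. Note also that torchvision’s pretrained detection backbones use FrozenBatchNorm2d, which never updates running statistics, so batch norm may be a non-issue there anyway. A sketch, assuming a torchvision Faster R-CNN:

import torch
from torchvision.ops.misc import FrozenBatchNorm2d

model.train()
for module in model.modules():
    # stop running-mean/var updates; stored statistics are used instead
    if isinstance(module, (torch.nn.BatchNorm1d,
                           torch.nn.BatchNorm2d,
                           torch.nn.BatchNorm3d)):
        module.eval()

# check whether the backbone uses frozen batch norm anyway
print(sum(isinstance(m, FrozenBatchNorm2d) for m in model.modules()))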

Hello,

Did you reach any conclusion? I am also working on object detection on a custom dataset, and I would like to track how the validation and training losses evolve, but I’m not sure whether it is good practice to use .train() mode during evaluation.

@mapostig No, I guess it’s not good practice to use model.train() mode during evaluation. You can use the same custom dataset class to create a separate data loader for your evaluation dataset (a sketch follows after the phase loop below).

for phase in ['train', 'val']:
    if phase == 'train':
        model.train()
        # training part with backprop
    else:
        model.eval()
        # just a forward pass
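For the separate evaluation loader mentioned above, a minimal sketch (val_dataset stands for a hypothetical instance of your custom dataset class; detection models expect lists of images and targets, hence the zip-based collate):

import torch

data_loader_val = torch.utils.data.DataLoader(
    val_dataset,
    batch_size=2,
    shuffle=False,
    collate_fn=lambda batch: tuple(zip(*batch)),  # keep images/targets as lists
)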

Some layers like Dropout and BatchNorm behave differently under model.eval().
Further, you can look at this discussion.
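A minimal demonstration of the difference for dropout:

import torch

drop = torch.nn.Dropout(p=0.5)
x = torch.ones(4)

drop.train()
print(drop(x))  # roughly half the entries zeroed, the rest scaled to 2.0

drop.eval()
print(drop(x))  # identity: tensor([1., 1., 1., 1.])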

Even I am stuck at the same place. Is it possible to compute validation loss properly?

@loicdtx did you find any solution for this problem?

@Arun_Mohan, validation loss is only there to monitor overfitting during training; it has no analytical value on its own. It’s therefore completely fine to compute it the way I did in the original post (model in train mode with gradients deactivated).
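For reference, a minimal end-of-epoch sketch of that approach (validation_loss is a hypothetical helper; model, data_loader_val, and device as in the original post):

import torch

@torch.no_grad()  # gradients stay off for the whole function
def validation_loss(model, data_loader, device):
    model.train()  # train mode so the forward pass returns the loss dict
    total, batches = 0.0, 0
    for images, targets in data_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)
        total += sum(loss.item() for loss in loss_dict.values())
        batches += 1
    return total / max(batches, 1)

print(validation_loss(model, data_loader_val, device))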


@loicdtx thanks. I tried it the same way, and I don’t think it is an issue.