Get losses during validation with a torchvision FasterRCNN model

I trained a FasterRCNN model on my problem to a certain extent.
Now I’d like to set the model to evaluation mode (model.eval()) to disable some specific layers (like dropout), but I still want to get the loss on my data.
If I set model.eval() I get detections but no losses, while if I set model.train() my loss is different on every run with the same sample and I don’t get detections.
Can I do this without modifying the module source code?

I see training conditions like this in the FasterRCNN submodules:

    losses = {}
    if self.training:
        assert targets is not None
        labels, matched_gt_boxes = self.assign_targets_to_anchors(anchors, targets)
        regression_targets = self.box_coder.encode(matched_gt_boxes, anchors)
        loss_objectness, loss_rpn_box_reg = self.compute_loss(
            objectness, pred_bbox_deltas, labels, regression_targets)
        losses = {
            "loss_objectness": loss_objectness,
            "loss_rpn_box_reg": loss_rpn_box_reg,
        }
    return boxes, losses
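To make the branching above concrete, here is a minimal sketch with a toy module (the class, loss values, and shapes are made up for illustration; only the `self.training` check mirrors what torchvision does):

```python
import torch
import torch.nn as nn

class ToyDetector(nn.Module):
    """Toy module mimicking how the FasterRCNN submodules branch on self.training."""
    def forward(self, x, targets=None):
        boxes = torch.rand(3, 4)  # stand-in for predicted boxes
        losses = {}
        if self.training:         # the same flag torchvision checks
            assert targets is not None
            losses = {
                "loss_objectness": torch.tensor(0.5),
                "loss_rpn_box_reg": torch.tensor(0.1),
            }
        return boxes, losses

model = ToyDetector()

model.train()
_, train_losses = model(torch.rand(1, 3, 8, 8),
                        targets=[{"boxes": torch.rand(1, 4)}])
print(sorted(train_losses))  # ['loss_objectness', 'loss_rpn_box_reg']

model.eval()
_, eval_losses = model(torch.rand(1, 3, 8, 8))
print(eval_losses)           # {}
```

In eval mode the `if self.training:` branch is skipped entirely, which is why the loss dict comes back empty no matter what targets you pass.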

(from forward())

Does anybody have any advice on the subject?

I think you could call .eval() on all submodules via e.g. model.apply() or alternatively try to use:

    model.eval()
    model.training = True

This would set all submodules to eval mode and then reset only the parent model to training mode.
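For reference, this is what that suggestion does to the training flags, sketched on a plain nn.Sequential (the module choice is arbitrary, just to have a parent with a submodule):

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(0.5))

model.eval()           # recursively sets training=False on the model and all submodules
model.training = True  # flips only the parent's flag, not the submodules'

print(model.training)     # True  -- parent is back in "training" mode
print(model[1].training)  # False -- the Dropout submodule stays in eval mode
```

Note that .eval() and .train() recurse into children, while assigning model.training directly touches only that one module, which is the asymmetry this trick relies on.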

Now I get neither losses nor predictions, while I need both of them.

Are you getting a return value of Nones now?
I’m unsure about your initial use case, as torchvision.models.detection.fasterrcnn_resnet50_fpn doesn’t seem to contain any dropout layers and uses FrozenBatchNorm layers. Which layers would you like to set to eval mode specifically?

At least this code snippet doesn’t find any dropout layers:

    import torch.nn as nn
    import torchvision

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    for name, m in model.named_modules():
        if isinstance(m, nn.Dropout):
            print('setting {} to eval()'.format(name))

I didn’t know which specific layer causes such behaviour; I only knew that it might be Dropout or BatchNorm or something like that, because with the same sample my losses fluctuated a bit on every run.

But the main problem is that I cannot get predictions and losses simultaneously.
To get predictions I need to set model.eval(), and to get losses I need to set model.train(), so at the moment I run the model twice on the same sample, once for losses and once for predictions.

So could I get the desired output without the dual run?


Hi. Any update on this? I would like to get the losses during validation as well. Otherwise, I won’t be able to see if the model overfits or not. Any help would be appreciated.