How do I retrieve the F1 score in object detection (Faster R-CNN)?

Hi,

I’m new to object detection, so I’m just getting my head around a few things. I followed the PyTorch object detection tutorial, where the main evaluation metric was mean average precision (mAP).

I’ve added some extra bits and will apply non-maximum suppression (NMS) to reduce the number of bounding box predictions. However, it is still unclear to me how I can determine the overall F1 score when evaluating the full dataset.
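For the NMS step itself, I’m planning to use torchvision’s built-in. A minimal sketch of what I have in mind; out here is just a placeholder for a single prediction dict from the model (e.g. outputs[0]), and the 0.5 IoU threshold is arbitrary:

import torch
from torchvision.ops import nms

# out: one prediction dict with "boxes", "labels", "scores" keys
keep = nms(out["boxes"], out["scores"], iou_threshold=0.5)  # indices of boxes to keep
out = {k: v[keep] for k, v in out.items()}  # filter boxes, labels, and scores alike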

Currently, once I have trained the model (Faster R-CNN), I evaluate like this:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
cpu_device = torch.device("cpu")

model.eval()
with torch.no_grad():
    for images, targets in valid_data_loader:
        # The detection models expect a list of 3D image tensors, not a batched 4D tensor
        images = list(img.to(device) for img in images)
        outputs = model(images)
        # Move the predictions back to the CPU for post-processing
        outputs = [{k: v.to(cpu_device) for k, v in t.items()} for t in outputs]

However, I’m unsure how I can use this to determine true positives, false positives, false negatives, and true negatives, and from those the overall F1 score.

I found this one line in the GitHub repo for the tutorial:

res = {target["image_id"].item(): output for target, output in zip(targets, outputs)}

where printing out res (with a batch size of 3) gives me this:

{0: {'boxes': tensor([[434.2975, 191.6537, 467.0199, 224.7192],
        [ 31.4918, 354.0607,  62.8257, 386.2621],
        [469.9614,  86.2549, 501.9221, 117.5876],
        [215.4281, 481.5853, 247.2938, 511.3138],
        [ 90.5871, 313.2582, 123.1587, 345.9512],
        [331.6597, 341.3073, 364.3313, 375.1403]], device='cuda:0'), 'labels': tensor([1, 1, 1, 1, 1, 1], device='cuda:0'), 'scores': tensor([0.9648, 0.9629, 0.9624, 0.9151, 0.7567, 0.2403], device='cuda:0')}, 1: {'boxes': tensor([[1.5194e+02, 1.0456e+02, 1.8351e+02, 1.3742e+02],
        [3.8975e+02, 2.0607e+02, 4.2221e+02, 2.3842e+02],
        [3.0471e+02, 3.0989e+02, 3.3620e+02, 3.4213e+02],
        [1.3239e+01, 1.1389e+02, 4.4452e+01, 1.4622e+02],
        [2.9837e+02, 4.5101e+02, 3.3001e+02, 4.8375e+02],
        [4.5444e+02, 3.5601e-02, 4.8779e+02, 2.5613e+01]], device='cuda:0'), 'labels': tensor([1, 1, 1, 1, 1, 1], device='cuda:0'), 'scores': tensor([0.9767, 0.9669, 0.9590, 0.8096, 0.7546, 0.3785], device='cuda:0')}, 2: {'boxes': tensor([[248.2404, 110.2581, 279.5806, 141.3378],
        [ 29.6403, 150.2647,  64.2155, 182.6650],
        [178.2941, 176.4075, 210.9212, 208.9970],
        [362.3374,  93.1001, 396.6398, 125.1960],
        [360.2594,  65.7590, 393.0970,  99.8704]], device='cuda:0'), 'labels': tensor([1, 1, 1, 1, 1], device='cuda:0'), 'scores': tensor([0.9661, 0.8858, 0.8422, 0.6669, 0.1381], device='cuda:0')}}

I’m not really sure how I can use this information to compute each classification metric and determine an F1 score…
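From what I’ve gathered so far, the usual recipe is: for each image, greedily match predicted boxes to ground-truth boxes at some IoU threshold, count matched predictions as TP, unmatched predictions as FP, and unmatched ground-truth boxes as FN (per-box true negatives aren’t really defined in detection, and F1 doesn’t need them). Here’s a minimal single-class sketch of that idea; f1_counts is just my own name, all_targets/all_outputs stand for the per-image dicts collected in the eval loop above, and I’d run this on the post-NMS predictions:

import torch
from torchvision.ops import box_iou

def f1_counts(pred_boxes, gt_boxes, iou_thresh=0.5):
    """Greedily match predictions to ground truth; return (tp, fp, fn)."""
    if len(pred_boxes) == 0:
        return 0, 0, len(gt_boxes)        # nothing predicted: every GT box is a miss
    if len(gt_boxes) == 0:
        return 0, len(pred_boxes), 0      # no GT: every prediction is a false positive
    ious = box_iou(pred_boxes, gt_boxes)  # [num_preds, num_gt] pairwise IoU matrix
    tp = 0
    for p in range(ious.shape[0]):        # predictions assumed sorted by score (Faster R-CNN returns them that way)
        best_iou, best_gt = ious[p].max(dim=0)
        if best_iou >= iou_thresh:
            tp += 1
            ious[:, best_gt] = -1.0       # consume this GT so later predictions can't reuse it
    fp = len(pred_boxes) - tp
    fn = len(gt_boxes) - tp
    return tp, fp, fn

# Accumulate counts over the whole validation set
total_tp = total_fp = total_fn = 0
for target, output in zip(all_targets, all_outputs):  # per-image dicts collected in the eval loop
    tp, fp, fn = f1_counts(output["boxes"], target["boxes"])
    total_tp, total_fp, total_fn = total_tp + tp, total_fp + fp, total_fn + fn

precision = total_tp / max(total_tp + total_fp, 1)
recall = total_tp / max(total_tp + total_fn, 1)
f1 = 2 * precision * recall / max(precision + recall, 1e-8)

Does this look like a sensible way to go about it?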

Hi,
I am having a similar issue. Were you ever able to solve it? I have been trying to compute an IoU score using similar code and am having trouble. Just wanted to see if you came up with a solution.
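So far I’ve only got as far as building the pairwise IoU matrix with torchvision’s built-in; pred and target here are placeholders for one prediction dict and its matching ground-truth dict:

from torchvision.ops import box_iou

# pred["boxes"]: [N, 4] predicted boxes, target["boxes"]: [M, 4] ground-truth boxes,
# both in (x1, y1, x2, y2) format
ious = box_iou(pred["boxes"], target["boxes"])   # [N, M] pairwise IoU matrix
best_iou_per_pred, matched_gt = ious.max(dim=1)  # best ground-truth match per prediction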

Thank you