Calculate training accuracy if model output is a dict of loss values


I'm very new to machine learning, and I found quite an interesting implementation of Faster-RCNN with a resnet101 backbone from torchvision.
While tinkering around, I tried to calculate the accuracy from the model output during training.
The problem is that, in training mode, the model only returns a dictionary of loss values:

    for batch_index, (images, targets) in enumerate(train_data_loader):
        # move the images and targets to the device
        images = list(image.to(device) for image in images)
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)
        loss = sum(loss for loss in loss_dict.values())


The model is built like this:

    backbone = resnet_fpn_backbone('resnet152', pretrained=True)
    model = FasterRCNN(backbone, num_classes=2)

`loss_dict` output:

    {'loss_classifier': tensor(0.6970, device='cuda:0', grad_fn=<NllLossBackward>),
     'loss_box_reg': tensor(0.1849, device='cuda:0', grad_fn=<DivBackward0>),
     'loss_objectness': tensor(0.6929, device='cuda:0', grad_fn=<BinaryCrossEntropyWithLogitsBackward>),
     'loss_rpn_box_reg': tensor(0.3251, device='cuda:0', grad_fn=<DivBackward0>)}
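For reference, summing this dictionary is all my training loop does with it. A toy version with plain floats standing in for the CUDA tensors (values taken from the output above):

```python
# Toy loss_dict mirroring the structure Faster-RCNN returns in training
# mode; plain floats here instead of CUDA tensors, for illustration only.
loss_dict = {
    'loss_classifier': 0.6970,
    'loss_box_reg': 0.1849,
    'loss_objectness': 0.6929,
    'loss_rpn_box_reg': 0.3251,
}

# This is the same reduction as in the training loop above.
total_loss = sum(loss_dict.values())
print(total_loss)  # 1.8999
```

So the total loss is easy to get, but none of these entries look like something I can turn into an accuracy number directly.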

How do I calculate accuracy from this?

Can anyone point me in the right direction?
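For context, my current understanding is that in `eval()` mode, torchvision detection models return per-image dicts with `boxes`, `labels` and `scores` instead of losses. If that's right, one crude accuracy proxy would be to compare the top-scoring predicted label per image against the ground truth (ignoring box IoU entirely). A minimal sketch with made-up toy data instead of real model output, the `label_accuracy` helper is something I invented for illustration:

```python
def label_accuracy(predictions, targets, score_threshold=0.5):
    """Fraction of images whose top-scoring prediction matches the target label.

    A crude proxy: ignores bounding-box overlap, only checks class labels.
    """
    correct = 0
    for pred, tgt in zip(predictions, targets):
        # keep only predictions above the confidence threshold
        kept = [(s, l) for s, l in zip(pred['scores'], pred['labels'])
                if s >= score_threshold]
        # compare the highest-scoring surviving label to the first target label
        if kept and max(kept)[1] == tgt['labels'][0]:
            correct += 1
    return correct / len(targets)

# Toy stand-ins for model(images) output in eval mode and for the targets.
preds = [
    {'labels': [1, 2], 'scores': [0.9, 0.4]},
    {'labels': [2],    'scores': [0.8]},
]
tgts = [{'labels': [1]}, {'labels': [1]}]
print(label_accuracy(preds, tgts))  # 0.5
```

Is something like this the right idea, or is there a standard metric (mAP?) I should be using instead?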