When running losses.backward(): "Found dtype Double but expected Float" error, even though all data is float

When trying to rerun my torchvision MobileNet detection model, it suddenly stopped working with a runtime error. I spent an hour trying earlier versions of my code that I knew worked, and they all produce this same error. I also tried to debug it, but I can't find the problem in my code.
The error is: Found dtype Double but expected Float.
However, all of my data is float dtype, as I checked, and the model is also float dtype. I also tried converting both the model and the data to double, and this weird new error still persists.
Also, the error seems to happen when I call backward() on the losses, but the losses are still dtype float, not double, so this error makes no sense to me.
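
For reference, the checks I ran look roughly like this (simplified; backbone is the model declared below, and images/targets stand for one batch pulled from my train_loader):

# All model parameters report float32
print({p.dtype for p in backbone.parameters()})

# In train mode the detection model returns a dict of losses
backbone.train()
loss_dict = backbone(images, targets)
print({k: v.dtype for k, v in loss_dict.items()})

losses = sum(loss for loss in loss_dict.values())
print(losses.dtype)  # torch.float32, yet backward() still raises the error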

My model declaration is:

backbone = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn(pretrained=True)
backbone.roi_heads.box_predictor.cls_score.out_features = len(classes) 
backbone.roi_heads.box_predictor.bbox_pred.out_features = 4 * (len(classes))

And the main part of my dataset class is:

    img_path = ffile_path(self.imgs_key[idx], self.full_image_file_paths)
    boxes = convert_min_max(torch.as_tensor(self.id_bounding_boxes[self.imgs_key[idx]], dtype=torch.float32))
    labels = torch.as_tensor(self.id_labels[self.imgs_key[idx]], dtype=torch.int64)

    img = cv2.cvtColor(cv2.imread(img_path, cv2.IMREAD_COLOR), cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0

    image_id = torch.tensor([idx])
    area = find_area_bb(boxes)

    target = {}
    target["boxes"] = boxes
    target["labels"] = labels
    target["image_id"] = image_id
    target["area"] = area
    
    #Query about transforms for labels of images
    if self.transforms: 
      sample = {
                'image': img,
                'bboxes': target['boxes'],
                'labels': labels
            }

      sample = self.transforms(**sample)
      img = sample['image']

      if img_key not in self.noisy_fp:
        target['boxes'] = torch.stack(tuple(map(torch.tensor, zip(*sample['bboxes'])))).permute(1, 0)
    
    
    return img, target
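
For completeness, this is roughly how I checked the dtypes of what __getitem__ returns (dataset here is just a placeholder for an instance of this class):

img, target = dataset[0]
print(img.dtype)                                       # image dtype after the transforms
print(target['boxes'].dtype, target['labels'].dtype)
print(target['image_id'].dtype, target['area'].dtype)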

Also, here is the entire traceback. As you can see, it doesn't seem to relate to any of my input variables.

<ipython-input-106-1ca8faa2fbb0> in <module>()
----> 1 another_one_1 = train(backbone, 20, train_loader, test_loader, noise_loader, 0.0005, weight_decay = 1e-3, print_every = 60)

2 frames
<ipython-input-104-f45676a52fc3> in train(net, epochs, train_loader, test_loader, noise_loader, lr, weight_decay, print_every, lo_test_dataset, lo_train_dataset, lo_noise_dataset)
     42             net.train()
     43 
---> 44             losses.backward()
     45             # optimizer.step()
     46 

/usr/local/lib/python3.7/dist-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
    253                 create_graph=create_graph,
    254                 inputs=inputs)
--> 255         torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
    256 
    257     def register_hook(self, hook):

/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    147     Variable._execution_engine.run_backward(
    148         tensors, grad_tensors_, retain_graph, create_graph, inputs,
--> 149         allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    150 
    151 

RuntimeError: Found dtype Double but expected Float 

I was hoping someone could help me with this.
Thanks in advance.

Hi,
You can use the Automatic differentiation package (torch.autograd — PyTorch 1.9.0 documentation), in particular its anomaly detection mode, to help narrow down which forward op might have caused the issue.
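
For example, something along these lines (a minimal sketch, where backbone, images and targets stand for your model and one batch from your training loop; anomaly detection slows training, so only enable it while debugging):

import torch

# With anomaly detection enabled, the RuntimeError raised in backward() also
# prints the traceback of the forward op that created the offending tensor.
with torch.autograd.set_detect_anomaly(True):
    loss_dict = backbone(images, targets)
    losses = sum(loss for loss in loss_dict.values())
    losses.backward()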