I have consulted with you several times on the PyTorch Forums.
For a project, I am building a custom Faster R-CNN model that outputs each object's bounding box, label, and additional attributes, but I am stuck on the following error.
ValueError Traceback (most recent call last)
C:\Users\TANABE~1\AppData\Local\Temp/ipykernel_19556/2267599829.py in <module>
17
18
---> 19 loss_dict = model(x1, x2)
20
21 losses = sum(loss for loss in loss_dict.values())
C:\anaconda\envs\pytorch-gpu\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
C:\Users\TANABE~1\AppData\Local\Temp/ipykernel_19556/3309378347.py in forward(self, images, targets)
84
85 proposals, proposal_losses = self.rpn(images, features, targets)
---> 86 detections, detector_losses, leaf_age_losses = self.roi_heads(features, proposals, images.image_sizes, targets)
87 detections = self.transform.postprocess(detections, images.image_sizes, original_image_sizes)
88
C:\anaconda\envs\pytorch-gpu\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
C:\anaconda\envs\pytorch-gpu\lib\site-packages\torchvision\models\detection\roi_heads.py in forward(self, features, proposals, image_shapes, targets)
752 box_features = self.box_roi_pool(features, proposals, image_shapes)
753 box_features = self.box_head(box_features)
--> 754 class_logits, box_regression = self.box_predictor(box_features)
755
756 result: List[Dict[str, torch.Tensor]] = []
ValueError: too many values to unpack (expected 2)
Looking at the traceback above, I figured the error occurs because my custom box predictor returns more values than the default RoIHeads expects to unpack. However, rewriting the torchvision code itself does not seem like a good idea.
Could you please suggest some good solutions?
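To illustrate what I believe is happening, here is a minimal sketch of the mismatch (the predictor functions are hypothetical stand-ins, not the actual torchvision classes): torchvision's roi_heads.py unpacks exactly two values from the box predictor, so a custom predictor that also returns attribute logits triggers this ValueError.

```python
def default_predictor(features):
    # The stock FastRCNNPredictor returns exactly two values,
    # which matches the two-value unpacking in roi_heads.py.
    return "class_logits", "box_regression"

def custom_predictor(features):
    # A custom head that additionally returns attribute logits
    # (e.g. leaf age) yields a third value.
    return "class_logits", "box_regression", "leaf_age_logits"

# Works: two values on the left, two on the right.
class_logits, box_regression = default_predictor(None)

# Fails: three values cannot be unpacked into two names.
try:
    class_logits, box_regression = custom_predictor(None)
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)
```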
Thank you.