Mask R-CNN *.pt file causes exception

Hello, I successfully trained a standard Mask R-CNN model in PyTorch (an example based on the book Computer Vision with PyTorch).
I used the following code to create the .pt file:

sample = torch.rand(1, 3, 640, 480).to(device)
model = model.to(device)
scripted_model = torch.jit.script(model, sample)
scripted_model.save('my_script.pt')
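As a side note on the snippet above: `torch.jit.script` compiles a module from its Python source and does not take an example input; the example input belongs to `torch.jit.trace`. A minimal sketch with a toy module (a stand-in, not the actual Mask R-CNN) showing the two APIs:

```python
import torch
import torch.nn as nn

# Toy stand-in for the detector, just to illustrate the two APIs.
class Tiny(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2.0

model = Tiny().eval()
sample = torch.rand(1, 3, 4, 4)

# script() compiles from source; no example input is needed.
scripted = torch.jit.script(model)

# trace() is the variant that actually consumes an example input.
traced = torch.jit.trace(model, sample)

assert torch.equal(scripted(sample), traced(sample))
```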

I can successfully load the model in Python using the code below:

model = torch.jit.load("my_script.pt", map_location='cuda')
model.eval()
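For reference, a self-contained round-trip sketch of the same save/load path (with a toy module and a hypothetical file name `tiny.pt`, not the actual detector):

```python
import torch
import torch.nn as nn

class Tiny(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + 1.0

scripted = torch.jit.script(Tiny().eval())
scripted.save('tiny.pt')  # hypothetical file name for the sketch

# map_location='cpu' keeps the sketch runnable without a GPU.
loaded = torch.jit.load('tiny.pt', map_location='cpu')
loaded.eval()
assert torch.equal(loaded(torch.ones(2)), torch.full((2,), 2.0))
```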

But when I try to load the model in C++, I get an exception:

    torch::jit::script::Module module;
    try
    {
      auto device = torch::kCUDA;
      // Deserialize the ScriptModule from a file using torch::jit::load().
      module = torch::jit::load("my_script.pt", device);
    }
    catch (const c10::Error& e)
    {
      // Print the full error report, not just a generic message.
      std::cerr << "error loading the model\n" << e.what() << "\n";
      return -1;
    }

Where am I wrong?

I captured additional diagnostic information:

terminate called after throwing an instance of 'torch::jit::ErrorReport'
what():
Unknown builtin op: torchvision::nms.
Could not find any similar ops to torchvision::nms. This op may not exist or may not be currently supported in TorchScript.
:
File "code/__torch__/torchvision/ops/boxes.py", line 138
  _59 = __torch__.torchvision.extension._assert_has_ops
  _60 = _59()
  _61 = ops.torchvision.nms(boxes, scores, iou_threshold)
        ~~~~~~~~~~~~~~~~~~~ <--- HERE
  return _61
'nms' is being compiled since it was called from '_batched_nms_vanilla'
File "/home/user/.local/lib/python3.10/site-packages/torchvision/ops/boxes.py", line 109
  for class_id in torch.unique(idxs):
      curr_indices = torch.where(idxs == class_id)[0]
      curr_keep_indices = nms(boxes[curr_indices], scores[curr_indices], iou_threshold)
                          ~~~ <--- HERE
      keep_mask[curr_indices[curr_keep_indices]] = True
  keep_indices = torch.where(keep_mask)[0]
Serialized File "code/__torch__/torchvision/ops/boxes.py", line 83
  _31 = torch.index(boxes, _30)
  _32 = annotate(List[Optional[Tensor]], [curr_indices])
  curr_keep_indices = __torch__.torchvision.ops.boxes.nms(_31, torch.index(scores, _32), iou_threshold, )
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
  _33 = annotate(List[Optional[Tensor]], [curr_keep_indices])
  _34 = torch.index(curr_indices, _33)
'_batched_nms_vanilla' is being compiled since it was called from 'batched_nms'
Serialized File "code/__torch__/torchvision/ops/boxes.py", line 35
  idxs: Tensor,
  iou_threshold: float) -> Tensor:
  _9 = __torch__.torchvision.ops.boxes._batched_nms_vanilla
  _10 = __torch__.torchvision.ops.boxes._batched_nms_coordinate_trick
  _11 = torch.numel(boxes)
'batched_nms' is being compiled since it was called from 'RegionProposalNetwork.filter_proposals'
Serialized   File "code/__torch__/torchvision/models/detection/rpn.py", line 72
  _11 = __torch__.torchvision.ops.boxes.clip_boxes_to_image
  _12 = __torch__.torchvision.ops.boxes.remove_small_boxes
  _13 = __torch__.torchvision.ops.boxes.batched_nms
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
  num_images = (torch.size(proposals))[0]
  device = ops.prim.device(proposals)
'RegionProposalNetwork.filter_proposals' is being compiled since it was called from 'RegionProposalNetwork.forward'
File "/home/user/.local/lib/python3.10/site-packages/torchvision/models/detection/rpn.py", line 372
      proposals = self.box_coder.decode(pred_bbox_deltas.detach(), anchors)
      proposals = proposals.view(num_images, -1, 4)
      boxes, scores = self.filter_proposals(proposals, objectness, images.image_sizes, num_anchors_per_level)
                      ~~~~~~~~~~~~~~~~~~~~~ <--- HERE
  
      losses = {}
Serialized   File "code/__torch__/torchvision/models/detection/rpn.py", line 43
  proposals0 = torch.view(proposals, [num_images, -1, 4])
  image_sizes = images.image_sizes
  _8 = (self).filter_proposals(proposals0, objectness0, image_sizes, num_anchors_per_level, )
                                                                     ~~~~~~~~~~~~~~~~~~~~~ <--- HERE
  boxes, scores, = _8
  losses = annotate(Dict[str, Tensor], {})

I added diagnostic information in the notebook:

print(torch.__version__)
print(torchvision.__version__)
print(torch_snippets.__version__)

The result is:

2.2.2+cu121
0.17.2+cu121
0.528