TorchScript FasterRCNN model fails to save with _save_for_lite_interpreter

System and Packages
CUDA version: 11.8
PyTorch version: 2.0.1+cu118
cuDNN version: 8700
Python 3.10.11
OS: Windows

I am trying to use a FasterRCNN model on Android. To do that, I am converting my PyTorch model to TorchScript and saving it with _save_for_lite_interpreter. I get this error with all of the object detection models. Here is my code:

import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torchvision.models.detection.fasterrcnn_resnet50_fpn_v2()
model.eval()
script_model = torch.jit.script(model)
script_model.eval()
optimized = optimize_for_mobile(script_model)
optimized._save_for_lite_interpreter('test_model.ptl')

The error occurs on the last line and reads as follows:

Exception has occurred: RuntimeError
__torch__ types other than custom c++ classes (__torch__.torch.classes)are not supported in lite interpreter. Workaround: instead of using arbitrary class type (class Foo()), define a pytorch class (class Foo(torch.nn.Module)). The problematic type is: __torch__.torchvision.models.detection.image_list.ImageList
  File "U:\workspace\ai_pipline\FasterRCNN\test.py", line 11, in <module>
    optimized._save_for_lite_interpreter('test_model.ptl')
RuntimeError: __torch__ types other than custom c++ classes (__torch__.torch.classes)are not supported in lite interpreter. Workaround: instead of using arbitrary class type (class Foo()), define a pytorch class (class Foo(torch.nn.Module)). The problematic type is: __torch__.torchvision.models.detection.image_list.ImageList

I am adding a screenshot as well to make it clearer.

I would really appreciate it if someone could help me out. Thanks!