Unable to compile model in torch_tensorrt

Hi team, I have built an object detection model using the torchvision FasterRCNN model. I need to deploy this model on the NVIDIA Triton server, so I'm trying to compile the model using torch_tensorrt, but it's failing.

@ptrblck Could you please help me compile the model?

Please find the versions of each component below:

OS: Ubuntu 20.04
Python: 3.10.8
torch version: 1.13.1
tensorrt version: 8.5.2.2
torch_tensorrt version: 1.3.0
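
For anyone reproducing, these can be confirmed from Python:

import torch, tensorrt, torch_tensorrt
# print the installed versions of each package
print(torch.__version__, tensorrt.__version__, torch_tensorrt.__version__)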

Please find the script below:

import torch, tensorrt, torch_tensorrt, torchvision

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn().eval().to(device)

trt_module = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 720, 1280))],  # fixed input shape
    enabled_precisions={torch.half},  # run with FP16
)
# save the TensorRT-embedded TorchScript
torch.jit.save(trt_module, "trt_torchscript_module.ts")
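
For reference, this is a minimal sketch of how I would load the saved module back for inference, assuming compilation succeeds (importing torch_tensorrt is needed so the embedded TensorRT ops can be deserialized):

import torch
import torch_tensorrt  # registers the TensorRT runtime ops used by the saved module

trt_module = torch.jit.load("trt_torchscript_module.ts").cuda()
output = trt_module(torch.randn(1, 3, 720, 1280, device="cuda"))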

It's throwing an error; please find it below:

RuntimeError                              Traceback (most recent call last)
Cell In[3], line 1
----> 1 trt_module = torch_tensorrt.compile(model,
      2     inputs = [torch_tensorrt.Input((1, 3, 720, 1280))], # input shape   
      3     enabled_precisions = {torch.half} # Run with FP16
      4 )
      5 # save the TensorRT embedded Torchscript
      6 torch.jit.save(trt_module, "trt_torchscript_module.ts")

File ~/miniconda3/envs/tensorrt/lib/python3.10/site-packages/torch_tensorrt/_compile.py:125, in compile(module, ir, inputs, enabled_precisions, **kwargs)
    120         logging.log(
    121             logging.Level.Info,
    122             "Module was provided as a torch.nn.Module, trying to script the module with torch.jit.script. In the event of a failure please preconvert your module to TorchScript",
    123         )
    124         ts_mod = torch.jit.script(module)
--> 125     return torch_tensorrt.ts.compile(
    126         ts_mod, inputs=inputs, enabled_precisions=enabled_precisions, **kwargs
    127     )
    128 elif target_ir == _IRType.fx:
    129     if (
    130         torch.float16 in enabled_precisions
    131         or torch_tensorrt.dtype.half in enabled_precisions
    132     ):
...

RuntimeError: 
temporary: the only valid use of a module is looking up an attribute but found  = prim::SetAttr[name="_has_warned"](%self, %self.backbone.body.1.use_res_connect)
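
As a side note, the log above suggests pre-converting the module to TorchScript; a minimal sketch of that (whether it gets past this particular error is untested):

import torch, torchvision, torch_tensorrt

model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn().eval()
# script the module explicitly instead of letting torch_tensorrt do it internally,
# so scripting failures are separated from TensorRT conversion failures
scripted = torch.jit.script(model)
trt_module = torch_tensorrt.compile(
    scripted,
    inputs=[torch_tensorrt.Input((1, 3, 720, 1280))],
    enabled_precisions={torch.half},
)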

#tensorrt #torch_tensorrt #compile #jit

CC @narendasan as a core dev of TorchTRT

@iamexperimentingnow More information can be found by enabling debug logging with:

with torch_tensorrt.logging.debug():
    trt_module = torch_tensorrt.compile(...)

That said, it seems you are trying to compile a detection network, which may have issues due to its use of input-dependent shapes. One option to try is setting ir="fx" to use the FX frontend, which has had better success with detection networks.

@narendasan, thanks for your reply. I have filed a GitHub issue; here is the link: 🐛 [Bug] Unable to compile the model using torch tensorrt · Issue #1565 · pytorch/TensorRT · GitHub. I also tried the FX frontend as you suggested:

import torch, tensorrt, torch_tensorrt, torchvision

example = torch.rand(1, 3, 720, 1280)  # example input for tracing
model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn().eval()
with torch_tensorrt.logging.debug():
    trt_module = torch_tensorrt.compile(
        model,
        ir="fx",
        inputs=[example],
        enabled_precisions={torch.half},  # run with FP16
    )
# save the TensorRT-embedded TorchScript
torch.jit.save(trt_module, "trt_torchscript_module.ts")

It didn't solve the issue; please find the error below:

TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors
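
For context, the FX frontend symbolically traces the model, and tracing cannot follow data-dependent control flow; here is a minimal sketch (not from this model) that reproduces the same class of failure:

import torch.fx

def f(x):
    # iterating over a tensor during symbolic tracing yields Proxy objects,
    # and torch.fx raises a TraceError when a Proxy is iterated
    for row in x:
        pass
    return x

torch.fx.symbolic_trace(f)  # raises TraceError: Proxy object cannot be iterated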

@narendasan, could you give it a try? Since the model comes from the torchvision module, it should be easy to reproduce.

Did you get a chance to reproduce it?