Trying to compile an out-of-the-box YOLOv5 model to TensorRT: what am I missing?

I am trying to create a TensorRT model from a loaded YOLOv5 model with the following code:

import torch
import torch_tensorrt

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True).eval()
inputs = [torch_tensorrt.Input([1, 3, 300, 300])]
enabled_precisions = [torch.half]

trt_model = torch_tensorrt.compile(model, inputs=inputs, enabled_precisions=enabled_precisions)

and I get this error:

    raise NotSupportedError(ctx_range, _vararg_kwarg_err)
torch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
  File "/home/zeloada/.conda/envs/ze-tiling/lib/python3.10/", line 477
    def __exit__(self, *exc_info):
                       ~~~~~~~~~ <--- HERE
        if not self._entered:
            raise RuntimeError("Cannot exit %r without entering first" % self)
'__torch__.warnings.catch_warnings' is being compiled since it was called from 'SPPF.forward'
  File "/home/zeloada/.cache/torch/hub/ultralytics_yolov5_master/models/", line 229
    def forward(self, x):
        x = self.cv1(x)
        with warnings.catch_warnings():
             ~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
            warnings.simplefilter('ignore')  # suppress torch 1.9.0 max_pool2d() warning
            y1 = self.m(x)
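For what it's worth, I tried to isolate the failure outside of TensorRT. My understanding (possibly wrong) is that the TorchScript scripting frontend can't compile `warnings.catch_warnings` because its `__exit__` takes `*exc_info`, and `SPPF.forward` uses that context manager. Here is a minimal sketch suggesting that tracing, which records ops from a forward pass instead of compiling Python source, doesn't hit this. `TinySPPF` is a toy stand-in I wrote for this repro, not the real YOLOv5 block:

```python
# Sketch of a possible workaround: trace instead of script.
# TinySPPF is a made-up toy module that uses the same context
# manager as YOLOv5's SPPF.forward; it is NOT the real block.
import warnings

import torch
import torch.nn as nn


class TinySPPF(nn.Module):
    """Toy stand-in for YOLOv5's SPPF block, using the same context manager."""

    def forward(self, x):
        with warnings.catch_warnings():
            warnings.simplefilter('ignore')  # same pattern that trips up scripting
            return torch.nn.functional.max_pool2d(x, 3, stride=1, padding=1)


module = TinySPPF().eval()
example = torch.randn(1, 3, 8, 8)

# torch.jit.trace records the executed ops from one forward pass, so the
# Python-level context manager never reaches the TorchScript frontend.
traced = torch.jit.trace(module, example)
assert torch.allclose(traced(example), module(example))
print('traced OK')
```

If scripting really is the culprit, maybe passing an already-traced module to `torch_tensorrt.compile` would get past this error, but I haven't verified that on the full YOLOv5 model.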

I'm new to TensorRT generally and not sure how to debug this. Any help would be greatly appreciated.

CC @narendasan in case you want to take a look at it.