Can I save an optimized model created with torch_tensorrt.compile?

I'm using torch_tensorrt.compile to run inference on models, and the runtime is working great. However, I would like to save the compiled model and reuse it next time without having to go through compilation again.

```python
model = models.resnet50(pretrained=True).eval()
```

At the end of compilation I get this information:

```
INFO optimized model type
<class 'torch._dynamo.eval_frame.OptimizedModule'>
```

How do I save this? The IR I'm using is ir="torch_compile" (the only one that works for me), not dynamo.

I don't know if the experimental torch.export utility supports TorchTRT, but @narendasan might know.

@ptrblck Thank you for the help. I tried the export approach with torch_tensorrt.dynamo.export (torch.export does not support TorchTRT), but I got the following exception:
[ERROR] Preprocessing failed: Detected that you are using FX to torch.jit.trace a dynamo-optimized function. This is not supported at the moment.

In general you cannot serialize torch.compile outputs; torch.export is the right tool for this. The error you saw may be from a previous version of Torch-TensorRT. Please take a look at this section of the docs: Saving models compiled with Torch-TensorRT — Torch-TensorRT v2.3.0.dev0+85971ff documentation