I'm using torch_tensorrt.compile to run inference on models, and the runtime is working great. However, I would like to save the compiled model and reuse it next time without having to go through compilation again.
model = models.resnet50(pretrained=True).eval()
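For context, here is a minimal sketch of the compile path I'm describing (continuing from the model above; the input shape and the placement on the GPU are just placeholders for whatever you actually feed the model, not exact code):

import torch
import torch_tensorrt

example_input = torch.randn(1, 3, 224, 224).cuda()

# ir="torch_compile" is the only IR that works for me here
trt_model = torch_tensorrt.compile(
    model.cuda(),
    ir="torch_compile",
    inputs=[example_input],
)

# the first inference call triggers the actual TensorRT compilation
out = trt_model(example_input)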
At the end of compilation I get this information:
INFO optimized model type
<class 'torch._dynamo.eval_frame.OptimizedModule'>
How do I save this? The IR I'm using is ir="torch_compile" (the only one that works for me), not dynamo.
@ptrblck Thank you for the help. I tried the export approach, but with torch_tensorrt.dynamo.export (torch.export does not support TorchTRT), and I got the following exception with that approach:
[ERROR] Preprocessing failed: Detected that you are using FX to torch.jit.trace a dynamo-optimized function. This is not supported at the moment.
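Roughly what I ran to hit this (paraphrased from memory, so the exact signature of torch_tensorrt.dynamo.export and the output filename are just assumptions; trt_model and example_input are the objects from the compile sketch above):

# trt_model is the OptimizedModule returned by torch_tensorrt.compile(..., ir="torch_compile")
exported_program = torch_tensorrt.dynamo.export(trt_model, [example_input])  # raises the error above
torch.export.save(exported_program, "resnet50_trt.ep")  # never reached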