How can I compile a scripted module with Torch-TensorRT?
I read the PyTorch docs (Using Torch-TensorRT Directly From PyTorch — Torch-TensorRT master documentation)
and followed the instructions there.
I made the script_model using torch.jit.script
and passed the same spec as in the tutorial:
spec = {
    "forward": torch_tensorrt.ts.TensorRTCompileSpec({
        "inputs": [torch_tensorrt.Input([1, 3, 300, 300])],
        "enabled_precisions": {torch.float, torch.half},
        "refit": False,
        "debug": False,
        "device": {
            "device_type": torch_tensorrt.DeviceType.GPU,
            "gpu_id": 0,
            "dla_core": 0,
            "allow_gpu_fallback": True
        },
        "capability": torch_tensorrt.EngineCapability.default,
        "num_min_timing_iters": 2,
        "num_avg_timing_iters": 1,
    })
}
trt_model = torch._C._jit_to_backend("tensorrt", script_model, spec)
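For context, the scripting step looked roughly like this. The actual model is not shown in this post, so TinyModel below is a hypothetical stand-in:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real model, which is not shown in this post.
class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))

model = TinyModel().eval()
# Produce the TorchScript module that the backend API expects
script_model = torch.jit.script(model)

# Sanity check: the scripted module runs and matches the eager model
x = torch.randn(1, 3, 300, 300)
assert torch.allclose(model(x), script_model(x))
```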
However, this raised an input type error:
Input should have received a torch_tensorrt.Input, but it received the whole dictionary instead. So I changed the dictionary format to keyword arguments:
spec = {
    "forward": torch_tensorrt.ts.TensorRTCompileSpec(
        inputs=[torch_tensorrt.Input((10, 1, 19, 500))],
        enabled_precisions={torch.float},
        refit=False,
        debug=False,
        device={
            "device_type": torch_tensorrt.DeviceType.GPU,
            "gpu_id": 0,
            "dla_core": 0,
            "allow_gpu_fallback": True
        },
        capability=torch_tensorrt.EngineCapability.default,
        num_min_timing_iters=2,
        num_avg_timing_iters=1,
    )
}
Now torch_tensorrt.ts.TensorRTCompileSpec reads the inputs correctly.
However, it still gives the following error:
trt_model = torch._C._jit_to_backend("tensorrt", script_model, spec)
RuntimeError: [Error thrown at /workspace/Torch-TensorRT/py/torch_tensorrt/csrc/tensorrt_backend.cpp:69] Expected core::CheckMethodOperatorSupport(mod, it->key().toStringRef()) to be true but got false
Method forward cannot be compiled by Torch-TensorRT
I am following the tutorial docs and have checked that the model scripts without errors.
How can I solve this problem?
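The error comes from the backend's operator-support check. A minimal sketch of running that check directly, assuming torch_tensorrt.ts.check_method_op_support is available in this Torch-TensorRT version:

```python
import torch_tensorrt

# Hedged sketch: run the same operator-support check the backend performs.
# check_method_op_support is assumed to exist in this Torch-TensorRT version;
# script_model is the TorchScript module produced earlier with torch.jit.script.
# When support is missing, Torch-TensorRT logs which op(s) are unsupported,
# which should point at what is blocking compilation of forward.
supported = torch_tensorrt.ts.check_method_op_support(script_model, "forward")
print("forward fully supported:", supported)
```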
Also, when I tried to convert the PyTorch model with torch_tensorrt.compile() directly,
it caused a segmentation fault.
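The direct attempt looked roughly like this. This is a sketch, not the exact call: the input shape mirrors the second spec above, and the ir="ts" argument (selecting the TorchScript frontend) is an assumption about the version in use:

```python
import torch
import torch_tensorrt

# Hedged sketch of the direct-compilation call that segfaulted.
# script_model is the TorchScript module produced earlier with torch.jit.script;
# the input shape and ir="ts" are assumptions, not confirmed by the post.
trt_model = torch_tensorrt.compile(
    script_model,
    ir="ts",
    inputs=[torch_tensorrt.Input((10, 1, 19, 500))],
    enabled_precisions={torch.float},
)
```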