What I want to do: optimize a PyTorch-trained model with TensorRT; the model was previously saved as a TorchScript module.
But after calling:
auto trt_mod = torch_tensorrt::torchscript::compile(module, compile_settings);
… the process gets stuck in what appears to be an infinite loop. I can also observe that the GPU load drops back to 0% after about 1 s.
According to this PR: fix: fix compilation stuck bug caused by elimination exception by bowang007 · Pull Request #1409 · pytorch/TensorRT · GitHub, this issue should already be fixed.
Versions:
- Torch-TensorRT v1.3.0
- PyTorch 1.13
- CUDA 11.7
- TensorRT 8.5
- cuDNN 8.5
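For context, here is a minimal sketch of how the compile call is set up. The model path and the input shape (1x3x224x224) are placeholders, not the actual values from my setup:

```cpp
#include <torch/script.h>
#include "torch_tensorrt/torch_tensorrt.h"

int main() {
    // Load the TorchScript module saved earlier (path is a placeholder)
    torch::jit::Module module = torch::jit::load("model.ts");
    module.to(torch::kCUDA);
    module.eval();

    // Describe the expected input; shape 1x3x224x224 is an assumption
    auto input = torch_tensorrt::Input(
        std::vector<int64_t>{1, 3, 224, 224},
        torch_tensorrt::DataType::kFloat);

    torch_tensorrt::torchscript::CompileSpec compile_settings({input});
    compile_settings.enabled_precisions = {torch::kFloat};

    // This is the call that hangs
    auto trt_mod = torch_tensorrt::torchscript::compile(module, compile_settings);
    return 0;
}
```

This fragment requires a CUDA-capable GPU plus the Torch-TensorRT libraries at build and run time, so it is illustrative rather than a standalone reproducer.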