How to use TensorRT backend to accelerate TorchScript

Can we directly use TensorRT to accelerate TorchScript without ONNX?

You could try torch2trt to use TensorRT directly through its Python API. However, as far as I know, the set of supported layers is limited, so you would have to check whether your model uses only supported operations.
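A minimal sketch of the torch2trt workflow, assuming torch2trt, TensorRT, and a CUDA GPU are available; the small `nn.Sequential` model here is just a placeholder, and the code falls back to plain PyTorch inference when torch2trt is not installed:

```python
import torch
import torch.nn as nn

# Placeholder model; in practice this would be your own network.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()
x = torch.randn(1, 3, 32, 32)

try:
    from torch2trt import torch2trt
    # torch2trt traces the model with the sample input and replaces
    # supported layers with TensorRT ops (requires CUDA + TensorRT).
    model_trt = torch2trt(model.cuda(), [x.cuda()])
    out = model_trt(x.cuda())
except ImportError:
    # Fall back to the original PyTorch model if torch2trt is unavailable.
    out = model(x)

print(tuple(out.shape))  # (1, 8, 30, 30)
```

If the conversion fails partway through, that usually means the model contains a layer torch2trt does not support, which is exactly what you would need to check for your model.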

A bit unrelated to your question, but it might also be interesting: you could have a look at this blog post to see how to deploy Tacotron 2 and WaveGlow with TensorRT 7.

I will check it, thank you very much! :)