What is the relationship between TorchScript and TensorRT?

Is it possible to use torch2trt to convert a trained model to TensorRT, and then convert the result to TorchScript?
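For context, the torch2trt conversion step I mean looks roughly like this (a minimal sketch following the torch2trt README; the resnet18 model and input shape are just placeholders):

```python
import torch
from torch2trt import torch2trt
from torchvision.models import resnet18

# Placeholder model and example input; swap in your own trained model.
model = resnet18(pretrained=True).eval().cuda()
x = torch.randn(1, 3, 224, 224).cuda()

# torch2trt traces the model with the example input and builds a TensorRT engine.
model_trt = torch2trt(model, [x])

# The converted module is called like a regular PyTorch module.
y = model(x)
y_trt = model_trt(x)
print(torch.max(torch.abs(y - y_trt)))  # should be close to zero
```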

Usually TensorRT would be the last deployment stage.
What is your use case for transforming a TensorRT model back to TorchScript?

We have a C++-based system, and I am not sure a TensorRT-converted model can work with C++.

TensorRT is a C++ library (with a Python interface) as described here, so your model should work in C++.
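As a sketch of the usual workflow: you can save the converted module's state_dict for reloading with torch2trt.TRTModule in Python, or serialize the raw engine so TensorRT's C++ runtime (nvinfer1::createInferRuntime and deserializeCudaEngine) can load it directly. Note the `.engine` attribute in option 2 is an assumption about how TRTModule stores the built engine:

```python
import torch

# Assuming `model_trt` is the converted module from the torch2trt example above.

# Option 1: save as a PyTorch state_dict; reload in Python via torch2trt.TRTModule.
torch.save(model_trt.state_dict(), 'model_trt.pth')

# Option 2 (assumption: TRTModule exposes the built TensorRT engine as `.engine`):
# write the serialized engine to disk, which the TensorRT C++ runtime can
# deserialize and run without any Python dependency.
with open('model.engine', 'wb') as f:
    f.write(model_trt.engine.serialize())
```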

That will be fine, thank you.

@ptrblck So when would TorchScript be used, if not as a deployment solution? From my understanding, most people train in Python but may want to run optimized inference in another language like C++. I thought this was the purpose of TorchScript (to run inference in C++), though I definitely see overlap with TensorRT. I believe that TensorRT has many more optimizations for runtime inference, but there may also be things TorchScript can do that TensorRT cannot, which I am not aware of. I'm still not 100% sure when to use one over the other.

A scripted model could still be used for training, no?
The major use case might be inference, but I also see the advantage of "porting" the model from Python to C++, where fine-tuning the model would also be possible.
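For example, a scripted module keeps its parameters and autograd support, so an ordinary training step still works (a minimal sketch with a toy model):

```python
import torch

model = torch.nn.Linear(10, 2)
scripted = torch.jit.script(model)

optimizer = torch.optim.SGD(scripted.parameters(), lr=0.1)

x = torch.randn(8, 10)
target = torch.randn(8, 2)

# The scripted module is still differentiable, so the usual loop applies.
out = scripted(x)
loss = torch.nn.functional.mse_loss(out, target)
loss.backward()
optimizer.step()

# Saving it lets libtorch load (and fine-tune) the same module from C++.
scripted.save('model_scripted.pt')
```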

Yes, TensorRT is an inference engine and you could use it to speed up your inference further.
Have a look at this blog post for more information.
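If you want to quantify the speed-up on your own model, a rough comparison could look like this (a sketch reusing `model`, `model_trt`, and `x` from the torch2trt example above; the warm-up iterations and torch.cuda.synchronize() calls are needed for meaningful GPU timings):

```python
import time
import torch

def benchmark(module, x, n_iters=100):
    # Warm up, then time n_iters forward passes with CUDA synchronization.
    with torch.no_grad():
        for _ in range(10):
            module(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(n_iters):
            module(x)
        torch.cuda.synchronize()
    return (time.time() - start) / n_iters

scripted = torch.jit.script(model)
print(f"TorchScript: {benchmark(scripted, x) * 1e3:.2f} ms/iter")
print(f"TensorRT:    {benchmark(model_trt, x) * 1e3:.2f} ms/iter")
```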
