PyTorch 1.0 or NVIDIA TensorRT?

PyTorch 1.0 is now offering optimizations for production deployment.

NVIDIA TensorRT also offers optimizations for production deployment. From https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html:
"It includes parsers for importing existing models from Caffe, ONNX, or TensorFlow, and C++ and Python APIs for building models programmatically."

It seems that TensorRT does not support PyTorch models directly yet.

If we develop in PyTorch, it is of course preferable to do everything (training & production deployment) in PyTorch.

Question: In terms of deployment, which one should be preferred, PyTorch or TensorRT? Is there any optimization that TensorRT does better than PyTorch?


There is some activity in Caffe2, but I don't know what the PyTorch 1.0 plan is.

You can export PyTorch models to the ONNX format and then use them with TensorRT.
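
As a minimal sketch of the export step (the model, file name, and input shape below are just illustrative assumptions), `torch.onnx.export` traces the model with a dummy input and writes an `.onnx` file that TensorRT's ONNX parser (or `trtexec`) can consume:

```python
import torch
import torchvision

# Any traceable PyTorch model works; ResNet-18 is used here only as an example.
model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # one 224x224 RGB image

# Trace the model and export it to ONNX.
torch.onnx.export(model, dummy_input, "resnet18.onnx",
                  input_names=["input"], output_names=["output"])
```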

When using onnx2tensorrt from GitHub to convert, I get a segmentation fault. Is there any demo of the conversion process from a .pth file to a .trt engine file?
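
For reference, a rough sketch of the ONNX-to-TensorRT step using the TensorRT Python API instead of the standalone converter (written against the TensorRT 5/6-era interface; newer releases changed the builder API, and the file names are placeholders):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser:
    builder.max_workspace_size = 1 << 30  # scratch space for layer tactics
    with open("resnet18.onnx", "rb") as f:
        if not parser.parse(f.read()):
            # Print parser errors instead of failing silently.
            for i in range(parser.num_errors):
                print(parser.get_error(i))
    engine = builder.build_cuda_engine(network)  # returns None on failure

# Serialize the engine so it can be reloaded at deployment time.
with open("resnet18.trt", "wb") as f:
    f.write(engine.serialize())
```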