Hi,
I exported a model to ONNX from PyTorch 1.0 and tried to build a TensorRT engine from it, but the build fails. I load the ONNX file into TensorRT with:
def build_engine_onnx(model_file):
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = common.GiB(1)
        # Load the ONNX model and parse it to populate the TensorRT network.
        with open(model_file, 'rb') as model:
            parser.parse(model.read())
        return builder.build_cuda_engine(network)

build_engine_onnx(model_path)
The call fails with the following error:
ERROR: Network must have at least one output.
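For reference, here is a variant of the build function that surfaces the parser's error messages instead of ignoring the result of parse() (a sketch, assuming the TensorRT 5.x Python API; a silently failed parse is one possible cause of the "no output" error):

```python
# Sketch, assuming the TensorRT 5.x Python API (trt.Builder, trt.OnnxParser).
def build_engine_onnx_checked(model_file):
    import tensorrt as trt  # imported here so the sketch is self-contained

    logger = trt.Logger(trt.Logger.WARNING)
    with trt.Builder(logger) as builder, \
         builder.create_network() as network, \
         trt.OnnxParser(network, logger) as parser:
        builder.max_workspace_size = 1 << 30  # 1 GiB, same as common.GiB(1)
        with open(model_file, 'rb') as model:
            # parse() returns False on failure; the original code ignored this.
            if not parser.parse(model.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        return builder.build_cuda_engine(network)
```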
When I print the ONNX model, the output node and opset look like this:
output {
  name: "output1"
  type {
    tensor_type {
      elem_type: FLOAT
      shape {
        dim { dim_value: 1 }
        dim { dim_value: 17 }
        dim { dim_value: 56 }
        dim { dim_value: 64 }
      }
    }
  }
}
opset_import {
  version: 9
}
So the model was exported with opset 9, but TensorRT's ONNX parser only supports opset 7. Is there a way to save the model with opset 7? The version converter in the ONNX utilities crashes when I try it.
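For reference, this is the kind of call I attempted with the ONNX version converter (a sketch; the file paths are placeholders):

```python
# Sketch: downgrade an ONNX model from opset 9 to opset 7 using the
# onnx version converter. This is the converter that crashes for me;
# the call is shown only so the exact usage is clear.
def convert_to_opset7(in_path, out_path):
    import onnx
    from onnx import version_converter

    model = onnx.load(in_path)
    converted = version_converter.convert_version(model, 7)
    onnx.save(converted, out_path)
```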