How can I save a PyTorch model in caffe2 format?

I failed to properly load the model in ONNX format, possibly due to a lack of support for the 0.0.3 ONNX IR in TensorRT. So I wonder: what is the easiest way to dump a PyTorch model in caffe2 format?

Hi,

They are quite different libraries, so I don’t think you can do the transfer without ONNX, I’m afraid.

I do have the caffe2 library, and I also “understand” how to generate pb files (I am using the convert-onnx-to-caffe2 tool), but I do not know how to use these files with the TensorRT parser.

Sorry, what I meant above was that you cannot transfer from PyTorch to caffe2 without ONNX.
For TensorRT, have you checked an NVIDIA converter such as this?

I had not seen this tool before. Can I dump the TensorRT engine and later load it in C++ code?

I’m afraid I don’t know. It might be mentioned on the repo?
cc @ptrblck

If you can convert your model using this repo, you should be able to export the engine using:

with open('model.engine', 'wb') as f:
    f.write(model_trt.engine.serialize())
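Loading it back later is the mirror image; a sketch of the Python side, assuming the tensorrt package is available (the C++ API has the same shape via `nvinfer1::createInferRuntime(...)` and `deserializeCudaEngine`):

```python
import tensorrt as trt

# Recreate a runtime and deserialize the engine bytes saved above.
logger = trt.Logger(trt.Logger.WARNING)
with open('model.engine', 'rb') as f:
    runtime = trt.Runtime(logger)
    engine = runtime.deserialize_cuda_engine(f.read())
```

Note that a serialized engine is tied to the GPU and TensorRT version it was built with, so it must be deserialized on a matching setup.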

I tried the solution you offered, and I must say it has its own new and enchanting issues. Nevertheless, @albanD, @ptrblck, thank you. I hope this tool will help me transfer my model to TRT.