PyTorch C++ Export to ONNX

Hi,

I'm using PyTorch C++ in a high-performance embedded system. I was able to create and train a custom model, and now I want to export it to ONNX to bring it into NVIDIA's TensorRT. I found an example of how to export to ONNX using the Python version of PyTorch, but I need to avoid Python if possible and stick with PyTorch C++ only. Here's the Python code snippet:

dummy_input = torch.randn(1, 3, 224, 224, device='cuda')
input_names = ["input"]
output_names = ["output"]
torch.onnx.export(model, dummy_input, "my_model.onnx", verbose=True, input_names=input_names, output_names=output_names)

How do I do this using the PyTorch C++ API?
If this capability doesn't exist, is there a simple way to convert the output of:

torch::save(network, "network_and_weights.pt");

to an ONNX file without needing to use the world of Python?

Thank you for any guidance.


To expand on this question: in C++ I can save and load an entire model with parameters using:

torch::save(network, "network_and_weights.pt");
torch::load(network, "network_and_weights.pt");

But there does not seem to be a way to save and load just the weights in C++. In Python you can do this with:

torch.save(model.state_dict(), "weights.pt")
model.load_state_dict(torch.load("weights.pt"))

The closest thing I found in C++ for saving is:

torch::save(network->parameters(), "weights.pt");

But there is no comparable C++ function to load… I was hoping for something like:

network->load_parameters(torch::load("weights.pt"));

Saving and loading just the weights seems like such a fundamental aspect of using a framework, and being able to export to ONNX also seems fundamental. I feel like I’m missing something obvious. Is there anyone out there who can help explain how to save/load weights and export to ONNX using C++?
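For what it's worth, here is one workaround that appears possible with current LibTorch (a minimal sketch, not an official API: it saves network->parameters() as a plain vector of tensors and copies them back in order, assuming both programs construct the identical architecture; save_weights and load_weights are hypothetical helper names):

  #include <torch/torch.h>
  #include <string>
  #include <vector>

  // Save just the parameter tensors (no module structure).
  void save_weights(const torch::nn::Module& network, const std::string& path) {
    torch::save(network.parameters(), path);
  }

  // Load them back by copying into a freshly constructed module's
  // parameters, in the same order they were saved.
  void load_weights(torch::nn::Module& network, const std::string& path) {
    std::vector<torch::Tensor> weights;
    torch::load(weights, path);
    torch::NoGradGuard no_grad;  // keep the copies out of autograd
    auto params = network.parameters();
    for (size_t i = 0; i < params.size(); ++i) {
      params[i].copy_(weights[i]);
    }
  }

Note this covers parameters() only; buffers such as BatchNorm running statistics would need the same treatment via network.buffers().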

Thanks


With all the fanfare of PyTorch, LibTorch, & ONNX, I’m surprised I’m not seeing any responses to this discussion. Day after day I just keep digging.

It's counter-intuitive that PyTorch and LibTorch have different file formats for the same torch functions save() and load(). To give an analogy: can you imagine an imaging company whose Python saveJPEG() and C++ saveJPEG() wrote completely different, incompatible file formats? I'm not trying to criticize; I'm trying to show how the development community is being confused, in a way that causes days or weeks of lost time and money.

Anyway, my goal is to train in LibTorch C++ (PyTorch is not an option for training with our embedded customers) and somehow save the model to ONNX so I can use it with NVIDIA's TensorRT. Since it sounds like there's no activity on implementing LibTorch's torch::onnx::export(), and no activity on making save() & load() interoperable between PyTorch and LibTorch, my only option is to try various hacks, as you can see from the posts above.

Another hack was to try (in C++):

  torch::Tensor dummy_input = torch::randn({1, 3, 224, 224});
  dummy_input = dummy_input.to(torch::kCUDA);  // to() returns a new tensor, so reassign
  auto traced_script_module = torch::jit::trace(model, dummy_input);
  traced_script_module.save("traced_model.pt");

and then import it into a simple PyTorch script to convert to ONNX:

import torch

model = torch.jit.load("traced_model.pt")
model = model.cuda()
model.eval()
input_names = ["input"]
output_names = ["output"]
dummy_input = torch.randn(1, 3, 224, 224, device='cuda')
dummy_output = torch.rand((1, 102), device='cuda')
torch.onnx.export(model, dummy_input, "resnet18_3x256x256.onnx", verbose=True, input_names=input_names, output_names=output_names, example_outputs=dummy_output)

But torch::jit::trace() is not implemented in LibTorch, and I don’t see any activity indicating it will be worked on.
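In the meantime, one path that does seem workable today (a hedged sketch, not an official recipe) is to author and script the architecture once in Python with torch.jit.script(model).save("scripted_model.pt"), then load, train, and re-save that TorchScript module entirely from C++; the re-saved file can then be fed to torch.onnx.export in a small Python script like the one above. The file names here are placeholders:

  #include <torch/script.h>

  int main() {
    // Assumes scripted_model.pt was produced once, up front, in Python:
    //   torch.jit.script(model).save("scripted_model.pt")
    torch::jit::script::Module module = torch::jit::load("scripted_model.pt");
    module.train();

    // ... C++ training loop calling module.forward({input}) goes here ...

    module.eval();
    // The saved file is a regular TorchScript module, loadable in Python
    // with torch.jit.load() and exportable via torch.onnx.export().
    module.save("trained_scripted_model.pt");
    return 0;
  }

This still requires Python once, at authoring time, but the training itself stays in C++.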

Am I the only person out there who needs to train in C++, export to ONNX, and import to TensorRT? If not, could you please voice yourself so the LibTorch developers can prioritize this a little higher.


I had the same expectations for the PyTorch -> ONNX -> TRT export. The day I realized I had to pass through Python in order to export to ONNX was the last day of me using PyTorch. If we have to build communities to beg the devs and wait for something so fundamental, it's beyond a joke.


I need this feature too.


I need this feature too. :sob:


I need this feature too.


Perhaps the two interfaces torch::jit::tracer::trace() and torch::jit::export_onnx() could solve this problem, but so far I don't know how to use them.



I need this feature too!


Same here! I need this feature too.


I need this feature too!


It seems there is still no solution for converting a LibTorch-trained model to ONNX, or even to a TorchScript model (a model saved from LibTorch is not exactly the same as a TorchScript model exported from Python)?


I need this feature too! Has anyone solved the problem?

I'm not aware of anyone working on this feature, but am sure it would be accepted, so maybe you and/or @ximitiejiang, @allinall, and @birds_are_drones would be interested in adding this feature?

I think it would be hard work to reimplement the symbolic ATen (torch) to ONNX operator conversion functions, which are implemented entirely in Python (see symbolic_opset*.py in pytorch/torch/onnx at master · pytorch/pytorch · GitHub).
torch.onnx.export is a PyTorch feature, not a LibTorch feature.