Converting a PyTorch model to ONNX with torch.onnx.export

I’m trying to convert a PyTorch model (a .pt file) to an ONNX model for later conversion to a TensorRT engine. The error I’m getting, which no longer makes any sense to me, is:
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select).
Of course I have moved my model to “cuda” with .cuda(), and the sample data to the same device with the same call. When I check the device of every layer of the model by iterating over model.parameters(), they are all on CUDA. I’m completely stuck and cannot figure out why it raises this error. On a side note, I can convert the model with do_constant_folding=False, but the resulting model turns out all wrong.
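For reference, since model.parameters() is a generator (it has no .device attribute itself), the check looked roughly like this, buffers included:

for name, p in model.named_parameters():
    assert p.device.type == 'cuda', f'{name} is on {p.device}'
for name, b in model.named_buffers():
    assert b.device.type == 'cuda', f'{name} is on {b.device}'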

With do_constant_folding=False, one interesting thing it outputs (among soooo many other messages) is:
(function ComputeConstantFolding)
graph(%images : Half(1, 3, 320, 192, strides=[184320, 61440, 192, 1], requires_grad=0, device=cuda:0),
%model.0.conv.weight : Half(16, 3, 6, 6, strides=[108, 36, 6, 1], requires_grad=0, device=cuda:0),
And this goes on for all the layers in the model; they all have requires_grad=0, device=cuda:0.
Oh great PyTorch god @ptrblck, please help!

Can you please show the code you are using to convert the model to ONNX? That would make it much easier to help you out. In the meantime, try using this:

device="cuda"
backbone = model()
backbone.load_state_dict(torch.load("PATH_TO_MODEL", map_location = torch.device(device)))
backbone.to(device)
backbone.eval()

# Export the model
inp = torch.rand(1, 3, 256, 256).to(device)
torch_out = torch.onnx._export(backbone, inp, "final_backbone.onnx", export_params=True)
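If the export goes through, it is worth a quick sanity check of the resulting file before moving on to TensorRT; a minimal sketch, assuming the onnx and onnxruntime packages are installed:

import numpy as np
import onnx
import onnxruntime as ort

# Structural validation of the exported graph
onnx.checker.check_model(onnx.load("final_backbone.onnx"))

# Run one dummy inference to confirm the model actually executes
sess = ort.InferenceSession("final_backbone.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
dummy = np.random.rand(1, 3, 256, 256).astype(np.float32)
print(sess.run(None, {input_name: dummy})[0].shape)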

Thanks for the reply @azhanmohammed
It’s quite a bit of code to paste, but I will detail the most relevant parts.
I load the model with

ckpt = torch.load(w, map_location=map_location)  # load

where map_location = device = torch.device(0). Then I prepare a dummy input and cast both it and the model to half precision:

im = torch.zeros(1, 3, 640, 640).to(device)
im, model = im.half(), model.half()
model.eval()

Then I export it like:

torch.onnx.export(model, im, f, verbose=False, opset_version=opset,
                  training=torch.onnx.TrainingMode.TRAINING if train else torch.onnx.TrainingMode.EVAL,
                  do_constant_folding=not train,
                  input_names=['images'],
                  output_names=['output'],
                  dynamic_axes={'images': {0: 'batch', 2: 'height', 3: 'width'},  # shape(1,3,640,640)
                                'output': {0: 'batch', 1: 'anchors'}  # shape(1,25200,85)
                                } if dynamic else None)

opset is either 12 or 13, and both train and dynamic are False.

Did the code I attached above work? In case it did not, here is another way you can try:

map_location = "cuda"
device = "cuda"
ckpt = torch.load(w, map_location=torch.device(map_location)) # load

im = torch.zeros(1, 3, 640, 640).to(device)

im, model = im.half(), model.half()

model.eval()

torch.onnx.export(model,                     # model being run
                  im,                        # model input (or a tuple for multiple inputs)
                  "output_model.onnx",       # where to save the model (can be a file or file-like object)
                  export_params=True,        # store the trained parameter weights inside the model file
                  opset_version=12,          # the ONNX version to export the model to
                  do_constant_folding=True,  # whether to execute constant folding for optimization
                  input_names=['input'],     # the model's input names
                  output_names=['output'])   # the model's output names

This should work as well.
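For what it’s worth, one common source of that exact index_select device error during constant folding is a tensor created inside forward() without an explicit device; constant folding evaluates those subgraphs at export time even though all the parameters are on CUDA. A contrived sketch (not claiming this is what your model does):

import torch

class Head(torch.nn.Module):
    def forward(self, x):
        idx = torch.tensor([0, 2, 1])  # created on the CPU by default
        # safer: idx = torch.tensor([0, 2, 1], device=x.device)
        return x.index_select(1, idx)  # cross-device call if x is on cuda:0

Grepping the model code for torch.tensor, torch.arange, torch.zeros and friends without a device= argument may turn up the culprit.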

Solved. This post can be closed. There is something with opset_version=13 that produces the wrong model; opset_version=12, however, does the trick!
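For anyone who lands here with the same symptom: a quick way to catch a wrong export is to compare the ONNX output against PyTorch directly; a minimal sketch, assuming a single-tensor output and a CUDA-enabled onnxruntime build:

import numpy as np
import onnxruntime as ort
import torch

with torch.no_grad():
    torch_out = model(im)[0]  # adjust the indexing to your model's output structure

sess = ort.InferenceSession(f, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
onnx_out = sess.run(None, {'images': im.cpu().numpy()})[0]
np.testing.assert_allclose(torch_out.float().cpu().numpy(), onnx_out.astype(np.float32),
                           rtol=1e-2, atol=1e-3)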

@hemma Can you please help me with a similar issue? I’m trying to convert a PyTorch model (text to speech) to ONNX and am getting a similar error. I have tried lowering the opset version, but none of that seems to have worked for me.