The ONNX network's output 'pred' dimensions should be non-negative

Hello,
I trained a CycleGAN and converted it to an ONNX model using the code below (an excerpt from a larger class, so load_filename and self.save_dir are defined elsewhere):

import os
import torch

net.eval()
net.cuda()  # in this example the model runs on CUDA

batch_size = 1
input_shape = (3, 512, 512)
export_onnx_file = load_filename[:-4] + ".onnx"
save_path = os.path.join(self.save_dir, export_onnx_file)
input_names = ["image"]
output_names = ["pred"]

# dummy input on the same device as the net
dinput = torch.randn(batch_size, *input_shape).cuda()
torch.onnx.export(net, dinput, save_path,
                  input_names=input_names, output_names=output_names,
                  opset_version=11,
                  dynamic_axes={
                      # 'image': {0: 'batch_size'},  # variable-length axes
                      'pred': {
                          0: '1',
                          1: '3',
                          2: '512',
                          3: '512',
                      }})

# summary(net, input_shape)
print('The ONNX file ' + export_onnx_file + ' is saved at %s' % save_path)

My goal is to use the ONNX model in Snap Lens Studio as a lens filter, but every time I import the model I get an error like this:

18:32:04	Resource import for /Users/youjin/Downloads/latest_net_G (6).onnx failed: The ONNX network's output 'pred' dimensions should be non-negative 

I printed the input and output shapes, and everything looks fine to me:

# the last line of torchsummary
             Tanh-91          [-1, 3, 512, 512]               0
# print(input.shape)
input shape torch.Size([1, 3, 512, 512])
# print(out.shape)
output shape torch.Size([1, 3, 512, 512])

I googled what ‘negative dimension’ implies and found this thread.
Initially, the output shape was not determined when I analyzed the model with Netron.


So I used the dynamic_axes argument of torch.onnx.export to set it to [1, 3, 512, 512].
After that change, it looked right in Netron.

So I thought I had fixed the error, but I am still getting the same one.
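For anyone hitting the same thing: the dims recorded in the exported file can also be checked without Netron. A minimal sketch, assuming the onnx Python package is installed (as far as I understand, string values in dynamic_axes label axes as symbolic rather than pinning them to constants, so this is worth verifying):

import onnx

model = onnx.load(save_path)
# Each output dim is either a fixed dim_value or a symbolic dim_param;
# Lens Studio seems to require fixed, non-negative values here.
for out in model.graph.output:
    dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
            for d in out.type.tensor_type.shape.dim]
    print(out.name, dims)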

I would appreciate any help.

It is much easier to convert PyTorch models to ONNX without specifying the batch size. I personally use:

import torch
import torch.onnx

# An instance of your model
net = ...  # instantiate / load your model here
net = net.cuda()
net = net.eval()

# An example input you would normally provide to your model's forward() method
x = torch.rand(1, 3, 512, 512).cuda()  # same device as the model

# Export the model
torch.onnx.export(net, x, "net.onnx", export_params=True)

Thanks, @azhanmohammed.
Unfortunately, the result does not change without setting the batch size.

@guu do you get the same error when keeping a batch size of 1?

@azhanmohammed yes, it gives me the same error message with or without the batch size.

Do you by any chance use a .view() or .reshape() operator in the forward call of the model? If so, the issue arises from the second common issue mentioned here. Try changing your forward call, save the model, and try the export again.
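For example, a change like this in the forward call (a hypothetical module, just to illustrate the pattern):

import torch.nn as nn

class Head(nn.Module):
    def forward(self, x):
        # Problematic: a -1 here may be exported as an unknown or
        # negative dimension in the ONNX graph:
        #   return x.view(-1, 3, 512, 512)
        # Safer: spell out every dimension explicitly so the exporter
        # records a fully static output shape:
        return x.view(1, 3, 512, 512)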

@azhanmohammed
Thanks for checking the Lens Studio docs.
Actually, I don’t have a .view() or .reshape() operator in my model.
However, I was thinking of adding one of them, since I believe the error implies the model needs an explicit output dimension.
I will refer to that page when I add .view() or .reshape().

Sure, do give that a try. Another thing you can try is to first create a JIT (just-in-time) traced or scripted model and then export that to ONNX; as per the docs, scripting preserves the dynamic control flow of the model, so that might help as well (see the sketch below). Let me know if that works or not.
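Something along these lines, reusing net and the input shape from above (a rough sketch, untested against Lens Studio):

import torch

# Trace the model with a fixed example input; the shapes seen during
# tracing are baked into the graph. torch.jit.script(net) keeps dynamic
# control flow instead, if the model needs it.
example = torch.randn(1, 3, 512, 512).cuda()
traced = torch.jit.trace(net, example)

# torch.onnx.export accepts a traced/scripted module directly.
torch.onnx.export(traced, example, "net_traced.onnx",
                  input_names=["image"], output_names=["pred"],
                  opset_version=11)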

Thanks for helping me!
It turns out that Lens Studio has some limitations on the ONNX models it can import, so it’s better to use only the models they provide.
I found the CycleGAN model provided by Snap Research, and it worked.
