Convert torch model meets error: "number of dims don't match in permute"

System information

  • OS Platform and Distribution : Linux Ubuntu 18.04
  • ONNX version : 1.9.0
  • Python version: 3.8.10
  • Torch version: 1.8.0

Describe the bug

I am trying to convert a torch model (.pt) to ONNX format, but I hit an unexpected error: export failure: number of dims don't match in permute. I inspected the model structure and found the module where the problem occurs, shown below:

import onnx
import torch 
import torch.nn as nn
import torch.nn.functional as F 
 
class Integral(nn.Module):

    def __init__(self, reg_max=16):
        super(Integral, self).__init__()
        self.reg_max = reg_max
        self.register_buffer('project',
                             torch.linspace(0, self.reg_max, self.reg_max + 1))

    def forward(self, x):
        # x.shape: (1, 3549, 68)
        x = F.softmax(x.reshape(x.shape[0], -1, self.reg_max + 1), dim=2)
        # x.shape: (1, 14196, 17)
        x = F.linear(x, self.project.type_as(x)).reshape(x.shape[0], -1, 4)
        return x

# model setting
device = torch.device('cpu')  # cpu or gpu
model = Integral()
model.to(device)
model.eval()

# input
x = torch.randn(1, 3549, 68).to(device)
f = opt.weights.replace('.pt', '.onnx')  # onnx filename

torch.onnx.export(model, x, f, verbose=False, opset_version=12,
                  input_names=['images'],
                  training=torch.onnx.TrainingMode.EVAL,  # opt.train = False
                  do_constant_folding=True,
                  dynamic_axes=None)

The code above gives the error RuntimeError: number of dims don't match in permute, and I don't see anything wrong with the tensor operations in forward. Can anyone help me out with this? Thanks!

The problem has been solved. I changed

self.register_buffer('project', torch.linspace(0, self.reg_max, self.reg_max + 1))

to

self.project = torch.arange(0, self.reg_max + 1).float()

and everything works fine.
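For reference, the two initializations produce the same projection vector; the change only swaps a registered buffer for a plain attribute, so what differs is how the constant is traced during export, not its values. A quick check (assuming the default reg_max=16):

```python
import torch

# buffer version: register_buffer('project', torch.linspace(0, 16, 16 + 1))
lin = torch.linspace(0, 16, 16 + 1)
# plain-attribute version: torch.arange(0, 16 + 1).float()
ar = torch.arange(0, 16 + 1).float()

print(torch.equal(lin, ar))  # True
```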

Hi two_Two,
I met the same error. I changed the code as you suggested, but it doesn't work for me. The problem may come from the F.linear function. Did you change anything else to make it work? Thank you :smiley:


My problem has been solved. I changed

x = F.linear(x, self.project.type_as(x)).reshape(x.shape[0],-1, 4)

to

self.project = torch.t(self.project)
x = torch.matmul(x, self.project).reshape(x.shape[0], -1, 4)

The method I provided was actually wrong; I will try yours once I'm free to do so!
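Putting the thread's fix together, here is a minimal self-contained sketch of the module with F.linear replaced by torch.matmul. The torch.t call from the reply above can be dropped, since torch.t returns a 1-D tensor unchanged; whether self.project is kept as a buffer or a plain attribute is a separate choice (a plain attribute will not follow model.to(device) automatically, which is why type_as is used below):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Integral(nn.Module):
    """Integral head: softmax over reg_max+1 bins, then an expectation
    over bin indices, computed with matmul instead of F.linear."""

    def __init__(self, reg_max=16):
        super(Integral, self).__init__()
        self.reg_max = reg_max
        self.project = torch.arange(0, self.reg_max + 1).float()

    def forward(self, x):
        # x: (N, num_anchors, 4 * (reg_max + 1)), e.g. (1, 3549, 68)
        x = F.softmax(x.reshape(x.shape[0], -1, self.reg_max + 1), dim=2)
        # (N, A*4, 17) @ (17,) -> (N, A*4), then reshape to (N, A, 4)
        x = torch.matmul(x, self.project.type_as(x)).reshape(x.shape[0], -1, 4)
        return x

model = Integral().eval()
x = torch.randn(1, 3549, 68)
out = model(x)
print(out.shape)  # torch.Size([1, 3549, 4])
```

In eager mode this is numerically identical to the original F.linear version; the only difference is the op the ONNX tracer sees.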