Simple quantized model doesn't export to ONNX

Hello, I’m having problems exporting a very simple quantized model to ONNX. The error message I’m seeing is:

AttributeError: 'torch.dtype' object has no attribute 'detach'

The cause is that ('fc1._packed_params.dtype', torch.qint8) ends up in the state_dict.

I asked on a previous (and old) thread if there was a solution and the answer was that this could be solved in the latest version of PyTorch. So I installed 1.7.0.dev20200705+cpu, but no joy.

I’ve pasted the example below.

Any thoughts on whether this is a fault on my part, a bug, or simply not supported would be greatly appreciated.

#Import libraries
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
#Needed for quantization
from torch.quantization import QuantStub, DeQuantStub
import torch.quantization

class Net(nn.Module):
    def __init__(self):
        #create instance of base class
        super().__init__()
        self.fc1 = nn.Linear(28*28, 10) #Inputs, outputs
    
        #Optimizer parameters
        self.learning_rate = 0.01
        self.epochs = 10
        self.log_interval = 10
        self.batch_size=200

        #Needed for quantization, per pytorch examples
        self.quant = QuantStub()
        self.dequant = DeQuantStub()

        #Training related functions
        self.optimizer = optim.SGD(self.parameters(), lr=self.learning_rate, momentum=0.9)
        self.criterion = nn.NLLLoss()

    def forward(self, x, save_intermediate = False, count=0):
        x1 = self.quant(x)
        x2 = self.fc1(x1)
        x3 = self.dequant(x2)
        return x3


net = Net()

net.qconfig = torch.quantization.default_qconfig
torch.quantization.prepare(net, inplace=True)
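#Note: a calibration pass with representative data would normally run here, between prepare and convert (skipped in this minimal example)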
torch.quantization.convert(net, inplace=True)

torch.onnx.export(net,
      torch.zeros([1,784]),
      'simple.onnx',
      opset_version=11,
      verbose=True
)
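
For reference, a quick way to see the offending entry is to print the converted model's state_dict. The value stored for the packed params' dtype is a torch.dtype rather than a tensor, which is what the exporter trips over:

#Inspect the converted model's state_dict
for key, value in net.state_dict().items():
    print(key, type(value))
#'fc1._packed_params.dtype' maps to torch.qint8 (a torch.dtype), so the
#exporter's call to .detach() on every state_dict value fails.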

General export of quantized models to ONNX isn’t currently supported; we only support conversion to ONNX for the Caffe2 backend. This thread has additional context on what we currently support: ONNX export of quantized model

@supriyar, many thanks.

Can you print the type of fc1._packed_params?

As mentioned in the other thread by @supriyar, can you try:

torch.onnx.export(q_model, pt_inputs, f, input_names=input_names, example_outputs=output,
                  operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)
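
For the model in the original post, that would look roughly like the following. This is only a sketch: 'input' is a placeholder name, example_outputs just comes from one forward pass, and depending on the PyTorch version you may also need to trace the quantized model with torch.jit.trace before exporting.

#Sketch adapting the suggestion above to the model from the original post
dummy_input = torch.zeros([1, 784])
example_output = net(dummy_input)

#If export still fails on the packed params, try exporting a traced module instead:
#net = torch.jit.trace(net, dummy_input)

torch.onnx.export(net,
                  dummy_input,
                  'simple.onnx',
                  input_names=['input'],
                  example_outputs=example_output,
                  operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)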

@supriyar and @jerryzh168, many thanks again.

Following the example, I’ve managed to get the model to convert. Is it the ATen fallback that forces the exporter to export specifically to Caffe2? Is Caffe2 effectively a subset?

My goal is to get this model through the Glow compiler. Glow supports importing both ONNX and Caffe2, but I’m still seeing issues with an unknown element kind. A long shot, as this isn’t the correct forum, but does anyone have experience with whether this is doable?

I’m not familiar with ONNX export; here is the doc for that: https://github.com/pytorch/pytorch/blob/master/docs/source/onnx.rst#onnx-aten-fallback

I think we are integrating with Glow now; I’ll ask someone from the Glow team to answer the question.

still seeing issues with an unknown element kind

Is this error during importing to Glow or with ONNX export?

Glow’s ONNX importer doesn’t track the latest ONNX very closely, so you could run into some issues. I’m not sure what model you’re working with, but you could also try using to_glow to lower from PyTorch to Glow more directly (like this), though that path is fairly new and has been tested mostly on ResNet-like models.

Hi, does PyTorch currently support exporting quantized models to ONNX?

We currently don’t support exporting PyTorch quantized models to ONNX. We welcome suggestions and contributions for this!

@supriyar
Is quantized model -> ONNX export supported today?

@bigtree I still get the error:

line 71, in _unique_state_dict
    filtered_dict[k] = v.detach()
AttributeError: 'torch.dtype' object has no attribute 'detach'

even with ONNX_ATEN_FALLBACK set.