ONNX export fails for int8 (quantized) model

Is quantize_per_tensor not supported by ONNX? Will more ops (like PReLU) be supported by nn.quantized?
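
For reference, here is a minimal sketch of the kind of model I mean (the module and file names are made up for illustration); the export call is where it fails, since quantize_per_tensor has no symbolic in the standard ONNX opset:

import torch

class QuantDequant(torch.nn.Module):
    def forward(self, x):
        # quantize the input to quint8, then immediately dequantize
        xq = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
        return xq.dequantize()

model = QuantDequant().eval()
img = torch.randn(1, 3, 224, 224)
# fails with an unsupported-operator error for quantize_per_tensor
torch.onnx.export(model, img, "int8_model.onnx")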

It’s not yet supported; we are still figuring out the plan for quantization support in ONNX.

Does PyTorch 1.4.0 support exporting quantized models to ONNX?

@supriyar has tested quantization in ONNX with one of our internal models, but I’m not sure about the long-term plans for that. @supriyar, can you comment?

The support that currently exists is for the PyTorch -> ONNX -> Caffe2 path. The intermediate ONNX operators contain references to the Caffe2 ops, so they cannot be executed standalone in ONNX. See https://github.com/pytorch/pytorch/blob/master/torch/onnx/symbolic_caffe2.py for more info.
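
Roughly, that path looks like the sketch below. This is a hedged outline, not the definitive recipe: the exact export flags vary across versions, and model/img stand for an already-converted quantized model and an example input.

import torch
import onnx
from caffe2.python.onnx import backend as c2_backend

# export with the ATen/Caffe2 fallback so quantized ops map to the
# Caffe2-specific symbolics instead of standard ONNX operators
torch.onnx.export(
    model, img, "quant_model.onnx",
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)

# the resulting file can only be executed through the Caffe2 backend,
# not by a generic ONNX runtime
onnx_model = onnx.load("quant_model.onnx")
rep = c2_backend.prepare(onnx_model)
outputs = rep.run(img.numpy())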

Hi, I’ve read your answer, but I am confused. You first need an ONNX model, which you later convert to Caffe2. But if I get an error when exporting to ONNX, how can I get to the second step?

Could you paste the error message?

I installed the nightly version of PyTorch.

torch.quantization.convert(model, inplace=True)
torch.onnx.export(model, img, "8INTmodel.onnx", verbose=True)


Traceback (most recent call last):
  File "check_conv_op.py", line 92, in <module>
    quantize(img)
  File "check_conv_op.py", line 59, in quantize
    torch.onnx.export(model, img, "8INTmodel.onnx", verbose=True)
  File "/usr/local/lib/python3.7/site-packages/torch/onnx/__init__.py", line 168, in export
    custom_opsets, enable_onnx_checker, use_external_data_format)
  File "/usr/local/lib/python3.7/site-packages/torch/onnx/utils.py", line 69, in export
    use_external_data_format=use_external_data_format)
  File "/usr/local/lib/python3.7/site-packages/torch/onnx/utils.py", line 485, in _export
    fixed_batch_size=fixed_batch_size)
  File "/usr/local/lib/python3.7/site-packages/torch/onnx/utils.py", line 334, in _model_to_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args, training)
  File "/usr/local/lib/python3.7/site-packages/torch/onnx/utils.py", line 282, in _trace_and_get_graph_from_model
    orig_state_dict_keys = _unique_state_dict(model).keys()
  File "/usr/local/lib/python3.7/site-packages/torch/jit/__init__.py", line 302, in _unique_state_dict
    filtered_dict[k] = v.detach()
AttributeError: 'torch.dtype' object has no attribute 'detach'

Looks like it’s calling detach() on a dtype object. Could you paste check_conv_op.py?
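
My guess at the mechanism (an assumption, not confirmed against your model): converted quantized modules save non-tensor entries such as a torch.dtype in their state dict, and the tracer’s _unique_state_dict calls .detach() on every value. A contrived sketch that reproduces the same AttributeError:

import torch

class DtypeInStateDict(torch.nn.Module):
    # mimics what converted quantized modules do: stash a non-tensor
    # 'dtype' entry in the state dict alongside real tensors
    def _save_to_state_dict(self, destination, prefix, keep_vars):
        super()._save_to_state_dict(destination, prefix, keep_vars)
        destination[prefix + 'dtype'] = torch.quint8

m = DtypeInStateDict()
for k, v in m.state_dict().items():
    # raises AttributeError: 'torch.dtype' object has no attribute 'detach'
    v.detach()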