[Nightly] Packed params no longer returned via state_dict() method

Hi, I installed the nightly build 1.6.0.dev20200607 today and ran my scripts that exercise quantization and JIT.

Until v1.5, I was able to get all packed params of a top-level quantized module, say a quantized ResNet from torchvision, via the state_dict() method. But now, with the nightly, I only get quant.scale and quant.zero_point from the same module.

I also noticed that a packed param is now an instance of torch._C.ScriptObject, instead of a QTensor as was the case until v1.5.

How do I get all parameters from a quantized + jitted model now? Can you point me to GitHub issues/PRs that introduced the relevant changes?

@jerryzh168 @raghuramank100

yeah, https://github.com/pytorch/pytorch/pull/35923 and https://github.com/pytorch/pytorch/pull/34140 are relevant changes.

We are using a TorchBind object for the packed params now.

Thanks, I’ll take a look. Seems like a big change.

Hi @jerryzh168,

Now that v1.6 is out, I came back to this issue. As I mentioned, the state_dict() method on traced quantized networks like qresnet from torchvision no longer returns all quantized parameters.

After some digging, and thanks to the ONNX export implementation below, I found that I can use torch._C._jit_pass_lower_graph(graph, model._c) to get at the quantized parameters I’ve been looking for. Is this the recommended way for third-party packages like TVM to get quantized parameters? Having to pass model._c seems like a very internal API…

cc @James_Reed can you take a look?

Do the conv packed params / linear packed params appear in state_dict()? You can call unpack on these objects to get the parameters, I think: https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/quantized/cpu/conv_packed_params.h#L17
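
For what it’s worth, here is a rough sketch of the unpacking I have in mind, assuming the packed params do show up in state_dict() of a traced quantized module (script_module here stands for your traced module, and the try/except dispatch is just for illustration, not an official API):

import torch

for name, value in script_module.state_dict().items():
    if isinstance(value, torch.Tensor):
        continue  # plain tensors like quant.scale need no unpacking
    # value should be a TorchBind packed-params object; try the conv unpack op
    # first and fall back to the linear one
    try:
        weight, bias = torch.ops.quantized.conv2d_unpack(value)
    except RuntimeError:
        weight, bias = torch.ops.quantized.linear_unpack(value)
    print(name, weight.shape, weight.dtype)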

Ok here is the test script.

import torch
from torchvision.models.quantization import mobilenet as qmobilenet


def quantize_model(model, inp):
    model.fuse_model()
    model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
    torch.quantization.prepare(model, inplace=True)
    model(inp)  # calibrate with one forward pass
    torch.quantization.convert(model, inplace=True)


qmodel = qmobilenet.mobilenet_v2(pretrained=True).eval()

pt_inp = torch.rand((1, 3, 224, 224))
quantize_model(qmodel, pt_inp)
script_module = torch.jit.trace(qmodel, pt_inp).eval()

graph = script_module.graph
print(script_module.state_dict())
_, params = torch._C._jit_pass_lower_graph(graph, script_module._c)

The model is the quantized MobileNet V2 from torchvision. The output of state_dict() from the above script is different between v1.6 and v1.5.1:

  • With v1.5.1, all packed quantized parameters (conv and linear) are returned. Unpacking is also no problem.
  • With v1.6, I only get OrderedDict([('quant.scale', tensor([0.0079])), ('quant.zero_point', tensor([0]))]), so there is nothing that can be unpacked.

In both versions, the last line in the above script, torch._C._jit_pass_lower_graph(graph, script_module._c), returns all quantized parameters. So technically my original problem is solved; my question is whether this is expected behavior.
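
For reference, this is roughly how I pick the quantized parameters out of the lowered graph output. It is only a sketch relying on internal APIs (_jit_pass_lower_graph and the quantized unpack ops), not something I would call an official interface:

_, params = torch._C._jit_pass_lower_graph(script_module.graph, script_module._c)

tensors = [p for p in params if isinstance(p, torch.Tensor)]
packed = [p for p in params if not isinstance(p, torch.Tensor)]
print(len(tensors), "plain tensors,", len(packed), "packed param objects")

# each packed object can then be unpacked as in the earlier snippet, e.g.
# weight, bias = torch.ops.quantized.conv2d_unpack(packed[0])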

Probably not, I’ll create an issue for this, thanks for reporting.

Actually, I think you are not supposed to call state_dict() on a TorchScript model. Could you call state_dict() before script/trace instead?
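
A minimal sketch of that, reusing the names from the test script above (and only if you still have the eager quantized model in hand):

# take state_dict() from the eager quantized model before tracing it
eager_sd = qmodel.state_dict()
script_module = torch.jit.trace(qmodel, pt_inp).eval()

# in eager mode the quantized conv/linear weights should come back as regular
# (quantized) tensors rather than packed TorchBind objects
print(list(eager_sd.keys())[:5])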

We (TVM) take jitted models as input, so we don’t get to see the original models. Fortunately, I found a workaround that doesn’t use state_dict(), so this is no longer a problem for us. Thanks.
