Accessing weights and bias values of a pretrained and quantized model

How can we access all weights and biases for a pretrained and quantized model of a neural network?
I have downloaded the model using the following command:

import torch
from torchvision import models

model = models.quantization.resnet18(pretrained=True, quantize=True)
for param_tensor in model.state_dict():
    print('param_tensor is', param_tensor)
    if model.state_dict()[param_tensor].dtype == torch.qint8:
        double_x = torch.int_repr(model.state_dict()[param_tensor]).numpy()
    else:
        double_x = model.state_dict()[param_tensor].detach().numpy()

This leads to the following error:
For param_tensor is fc._packed_params.dtype
Traceback (most recent call last):
  File "compression_quantize_test.py", line 63, in <module>
    if model.state_dict()[param_tensor].dtype == torch.qint8:
AttributeError: 'torch.dtype' object has no attribute 'dtype'

Can anyone let me know the correct way to access all the weights and biases?

You are seeing this error because the quantized model stores the weights of fully connected layers inside a _packed_params object, and that object, not a plain tensor, is what ends up in the state_dict. Some of its entries, such as fc._packed_params.dtype, are torch.dtype objects rather than tensors, so calling .dtype on them raises the AttributeError you got.

Print the objects in the state_dict to see what type each one actually is, then inspect the _packed_params object itself. That will show you how to access the weights of the FC layers.
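Concretely, two things usually work: guard the state_dict loop with an isinstance check so non-tensor entries are skipped, and use the weight()/bias() accessors that quantized Linear modules provide to unpack _packed_params. A minimal sketch below uses a standalone quantized Linear layer as a stand-in for model.fc of the quantized ResNet (to avoid the pretrained download); the access pattern is the same:

```python
import torch

# A standalone quantized Linear stands in for model.fc of the
# quantized ResNet-18; the same access pattern applies there.
fc = torch.nn.quantized.Linear(4, 2)

# Guard the loop: entries like _packed_params.dtype are not tensors,
# so checking .dtype on them would raise the AttributeError.
for name, value in fc.state_dict().items():
    if not isinstance(value, torch.Tensor):
        continue  # skip torch.dtype objects and packed-param entries
    if value.is_quantized:
        arr = torch.int_repr(value).numpy()  # raw int8 representation
    else:
        arr = value.detach().numpy()

# For FC layers, weight() and bias() unpack _packed_params directly.
w = fc.weight()  # quantized weight tensor
b = fc.bias()    # float bias tensor
print(w.int_repr().shape, b.shape)
```

For the model in the question, model.fc.weight() and model.fc.bias() give you the fully connected layer's parameters the same way, and the isinstance guard lets the state_dict loop run over all the other (convolutional) layers without tripping on the packed entries.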