How can I access all the weights and biases of a pretrained, quantized neural network model?

I have downloaded the model using the following code:

import torch
from torchvision import models

model = models.quantization.resnet18(pretrained=True, quantize=True)

for param_tensor in model.state_dict():
    print('For param_tensor is ', param_tensor)
    if model.state_dict()[param_tensor].dtype == torch.qint8:
        double_x = torch.int_repr(model.state_dict()[param_tensor]).numpy()
    else:
        double_x = model.state_dict()[param_tensor].detach().numpy()

This leads to the following error:

For param_tensor is  fc._packed_params.dtype
Traceback (most recent call last):
  File "compression_quantize_test.py", line 63, in <module>
    if model.state_dict()[param_tensor].dtype == torch.qint8:
AttributeError: 'torch.dtype' object has no attribute 'dtype'
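For context, the failure seems to happen because a quantized model's state_dict is not tensor-only: the entry fc._packed_params.dtype is itself a torch.dtype object, which has no .dtype attribute. A minimal sketch of a guard that skips non-tensor entries, using a small dynamically quantized Linear as a stand-in for resnet18 (the toy module and variable names here are illustrative, not from the original script):

```python
import torch

# Toy stand-in for the quantized resnet18: a dynamically quantized Linear,
# whose state_dict likewise contains a non-tensor "_packed_params.dtype" entry.
float_model = torch.nn.Sequential(torch.nn.Linear(4, 2))
qmodel = torch.quantization.quantize_dynamic(
    float_model, {torch.nn.Linear}, dtype=torch.qint8
)

for name, value in qmodel.state_dict().items():
    # Skip non-tensor entries (e.g. the stored torch.dtype object) instead of
    # assuming every state_dict value has a .dtype attribute.
    if not isinstance(value, torch.Tensor):
        print('skipping non-tensor entry:', name)
        continue
    if value.dtype == torch.qint8:
        arr = torch.int_repr(value).numpy()   # raw int8 representation
    else:
        arr = value.detach().numpy()
```

The isinstance check is the key change: iterating a quantized state_dict yields tensors, torch.dtype objects, and packed-parameter tuples, so any per-entry attribute access needs that guard first.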

Can anyone let me know the correct way to access all the weights and biases?