How do I dequantize a state dictionary?

Is it possible to dequantize the entire state dictionary as a whole?

Currently my plan is to dequantize layer by layer.
For the Conv2d layers I can just dequantize the `.weight` and use the `.bias` as is, but I'm having issues with the Linear layers. Looking through the keys I see

`fc1.scale`, `fc1.zero_point`, `fc1._packed_params.dtype`, `fc1._packed_params._packed_params`

but I do not know how to convert them into just

`fc1.weight`, `fc1.bias`

        sdict = self.model.state_dict()

        # Conv layers keep separate entries, so they can be dequantized directly
        sdict['conv1.weight'] = torch.dequantize(sdict['conv1.weight'])
        sdict['conv1.bias'] = torch.dequantize(sdict['conv1.bias'])
        sdict['conv2.weight'] = torch.dequantize(sdict['conv2.weight'])
        sdict['conv2.bias'] = torch.dequantize(sdict['conv2.bias'])
        # Quantized Linear layers pack (weight, bias) together under
        # '<name>._packed_params._packed_params', so index the tuple
        sdict['fc1.weight'] = torch.dequantize(sdict['fc1._packed_params._packed_params'][0])
        sdict['fc1.bias'] = torch.dequantize(sdict['fc1._packed_params._packed_params'][1])
        sdict['fc2.weight'] = torch.dequantize(sdict['fc2._packed_params._packed_params'][0])
        sdict['fc2.bias'] = torch.dequantize(sdict['fc2._packed_params._packed_params'][1])

Well, I managed to solve it by doing the above, since apparently the weight and bias for a quantized Linear layer are packed together in its `_packed_params._packed_params` entry.
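For anyone who wants to avoid hard-coding the layer names, the same idea can be written as a generic loop over the keys. This is just a sketch of one possible approach, not anything built into PyTorch: `dequantize_state_dict` is a hypothetical helper name, and it assumes the quantized Linear entries follow the `<name>._packed_params._packed_params` naming seen above, holding a `(weight, bias)` pair.

```python
import torch

PACKED_SUFFIX = '._packed_params._packed_params'

def dequantize_state_dict(qsdict):
    """Hypothetical helper: turn a quantized state dict into a float one.

    Assumes quantized Linear layers store a (weight, bias) tuple under
    '<name>._packed_params._packed_params', which is unpacked into
    separate '<name>.weight' / '<name>.bias' entries.
    """
    out = {}
    for key, value in qsdict.items():
        if key.endswith(PACKED_SUFFIX):
            # Unpack the combined (weight, bias) tuple of a quantized Linear
            prefix = key[: -len(PACKED_SUFFIX)]
            weight, bias = value[0], value[1]
            out[prefix + '.weight'] = torch.dequantize(weight)
            out[prefix + '.bias'] = torch.dequantize(bias)
        elif (key.endswith('._packed_params.dtype')
              or key.endswith('.scale')
              or key.endswith('.zero_point')):
            # Quantization metadata; a float state dict does not need it
            continue
        elif torch.is_tensor(value) and value.is_quantized:
            # Plain quantized tensors (e.g. conv weights) dequantize directly
            out[key] = torch.dequantize(value)
        else:
            # Already-float tensors and anything else pass through unchanged
            out[key] = value
    return out
```

Note that `torch.dequantize` on an already-float tensor is effectively a no-op, so the bias entries are safe either way; dropping every key ending in `.scale` or `.zero_point` assumes no real parameter in your model happens to share those suffixes.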