Thanks for the reply. The models are pre-trained. I have the same question regarding pretrained and quantized model from PyTorch.
Code:
from torchvision import models

model = models.quantization.resnet18(pretrained=True, quantize=True)
for param_tensor in model.state_dict():
    double_x = model.state_dict()[param_tensor].numpy()

I am getting the following error:
double_x = model.state_dict()[param_tensor].numpy()
TypeError: Got unsupported ScalarType QInt8

Can you please share how to convert it to a NumPy array?
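Quantized tensors can't be converted with .numpy() directly. A minimal sketch of the two standard workarounds, using a toy tensor in place of the ResNet weights: call .dequantize() to get the float values, or .int_repr() to get the raw integer representation.

```python
import torch

# A small quantized tensor standing in for a quantized model weight.
qt = torch.quantize_per_tensor(
    torch.tensor([0.5, -1.0, 2.0]),
    scale=0.1, zero_point=0, dtype=torch.qint8,
)

# Option 1: dequantize to float32, then convert.
float_array = qt.dequantize().numpy()

# Option 2: take the underlying int8 values directly.
int_array = qt.int_repr().numpy()
```

In your loop, that would be `model.state_dict()[param_tensor].dequantize().numpy()` for the quantized tensors (non-quantized buffers can keep the plain `.numpy()` call).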

model.parameters() will return all trainable parameters, so also the bias parameters, if available.
I'm not sure if you want to explicitly filter out the weights, but if so, you could use model.named_parameters() and filter for "weight" in each parameter's name.

Thank you for replying. I want to extract the best permutation of weights, i.e. the one for which my model gives the lowest loss on the validation set, and I do not want to use torch.save(model.state_dict(), path)
after every epoch. Rather, I want to save it into a variable x, and that x will be updated if my current validation loss (at epoch e) is lower than the previous validation loss (at epoch e-1).
Is there a way to do that, i.e. save the entire weight config in a single variable?
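One way to keep the best weights in a variable is to deep-copy the state_dict whenever the validation loss improves. A minimal sketch with a toy model and a placeholder validation loss (your real training and validation code would go where the comments indicate); the deepcopy is the important part, since state_dict() returns references to the live tensors, which later training steps would overwrite:

```python
import copy
import torch.nn as nn

model = nn.Linear(2, 1)  # stand-in for your real model
best_loss = float("inf")
best_state = None

for epoch in range(3):
    # ... training step would go here ...
    val_loss = 1.0 / (epoch + 1)  # placeholder for your validation loss

    if val_loss < best_loss:
        best_loss = val_loss
        # deepcopy is essential: without it, best_state would keep
        # pointing at the tensors the optimizer continues to update.
        best_state = copy.deepcopy(model.state_dict())

# After training, restore the best weights.
model.load_state_dict(best_state)
```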

Iām unsure why saving the stat_dict into a single file (and overwriting the previous state_dict) in an epoch with a new lowest validation loss wouldnāt work.