Printing weights after integer quantization

I loaded an FP32 model and converted it to int8, but when I print the weights afterwards I still get floating-point values as output, while I expected integers. Can someone let me know what I am missing?

model = torch.ao.quantization.convert(model, inplace=True)
for i in model.parameters():
  print(i)

Can you print the model, just to double-check that it is actually quantized?
Also, I think the quantized weights are probably not parameters.
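Right — after conversion the int8 weights live inside the quantized modules themselves, not in `model.parameters()`. A minimal sketch using dynamic quantization on a toy linear model (a stand-in, not the poster's actual network) shows where the integer values actually are:

```python
import torch
import torch.nn as nn

# toy FP32 model (illustrative stand-in for the poster's network)
float_model = nn.Sequential(nn.Linear(4, 2))

# dynamic quantization swaps nn.Linear for a quantized module
qmodel = torch.ao.quantization.quantize_dynamic(
    float_model, {nn.Linear}, dtype=torch.qint8
)

# the quantized module stores packed int8 weights, not nn.Parameters,
# so iterating over parameters() finds nothing to print
print(list(qmodel.parameters()))  # []

# the weight is exposed via a method on the quantized module
qw = qmodel[0].weight()
print(qw.dtype)       # torch.qint8
print(qw.int_repr())  # the raw int8 values
```

Note that printing a quantized tensor directly shows the *dequantized* float values (along with scale and zero point), which is why the output looked like FP32; `int_repr()` reveals the stored integers.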

How do I do that? print(model) would just print the model's architecture.
It would be helpful if someone could provide code showing how to quantize the pretrained model given below:

from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)