Please Help me to doing static-quantization with pytorch, I didin’t see my model to be quantized.

Maybe i have some problem that I can’t figure out. Thanks a lot !!!

Below are the code I use:

#########

import torch

import torch.nn as nn

from torchvision import models

from torch.ao.quantization import QuantStub, DeQuantStub, prepare, convert

pruned_model.load_state_dict(torch.load(weight_path),False) #load pruned_model.pt

pruned_model = QuantStub()(pruned_model)

pruned_model = DeQuantStub()(pruned_model)

```
#
pruned_model = prepare(pruned_model, qconfig_mapping)
quantized_model = convert(pruned_model, inplace=False)
# save model's weights
torch.save(pruned_model.state_dict(), "static_quantized_original.pt")
```

#########