How could I quantize models recursively and load the quantized model as float?

Hi,

I want to use dynamic mode to quantize my model. My code is like this:

import torch
import torch.nn as nn
import torchvision

net = torchvision.models.resnet50()

# try to dynamically quantize the Conv2d, BatchNorm2d and Linear layers
qnet = torch.quantization.quantize_dynamic(
    net, qconfig_spec={nn.Conv2d, nn.BatchNorm2d, nn.Linear}, dtype=torch.qint8
)

I found that only the last nn.Linear module is converted to DynamicQuantizedLinear; the other modules are left unchanged. What am I doing wrong, and how can I make it work?
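
For reference, this is roughly how I checked which submodules were actually replaced (just listing everything whose class comes from a quantized module):

# list which submodules were replaced with quantized versions
for name, module in qnet.named_modules():
    if 'quantized' in type(module).__module__:
        print(name, '->', module)
# only the final fc layer is printed (as DynamicQuantizedLinear);
# all the Conv2d and BatchNorm2d layers are still the original float modules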

By the way, I would also like to convert the quantized model back to a float model (float32 or float16). Something like this:

class Model(nn.Module):
    ....

fmodel = Model()
# hypothetical helper: load qint8 weights back as float
fmodel.load_quantized_state_dict(torch.load('qstate.pth'))
fmodel(torch.randn(1, 3, 32, 32).float())
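
If no such helper exists, I was wondering whether I could rebuild a float state_dict by hand, roughly like the sketch below. I'm assuming here that the dynamic quantized Linear exposes weight()/bias() and that dequantize() returns a float32 tensor; the dequantized_state_dict name is just something I made up, and I haven't verified this covers every case:

import torch
import torch.nn.quantized.dynamic as nnqd
import torchvision

def dequantized_state_dict(qmodel):
    # collect float tensors for every parameter, dequantizing the dynamic Linear layers
    state = {}
    for name, module in qmodel.named_modules():
        prefix = name + '.' if name else ''
        if isinstance(module, nnqd.Linear):
            # weight() returns the qint8 weight; dequantize() turns it back into float32
            state[prefix + 'weight'] = module.weight().dequantize()
            if module.bias() is not None:
                state[prefix + 'bias'] = module.bias()
        else:
            # non-quantized modules still hold ordinary float parameters and buffers
            for pname, param in module.named_parameters(recurse=False):
                state[prefix + pname] = param.detach()
            for bname, buf in module.named_buffers(recurse=False):
                state[prefix + bname] = buf
    return state

fnet = torchvision.models.resnet50()
fnet.load_state_dict(dequantized_state_dict(qnet))
fnet(torch.randn(1, 3, 32, 32).float())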

How could I do this, please?