Cannot quantize nn.Conv2d with dynamic quantization

I've read the PyTorch quantization documentation, and I expected it to quantize nn.Conv2d modules as well as nn.Linear. But when I try dynamic quantization, it only converts the nn.Linear layer.

I used the following simple dummy test:

import torch
import torch.nn as nn

class dumy_CNN(nn.Module):
    def __init__(self, ni, no):
        super().__init__()
        self.conv = nn.Conv2d(ni, no, 8, 2, 3)
        self.lin = nn.Linear(1024, 4)

    def forward(self, x):
        out = self.lin(self.conv(x))
        return out

model_test = dumy_CNN(2, 10)
model_qn = torch.quantization.quantize_dynamic(
    model_test, {nn.Linear, nn.Conv2d}, dtype=torch.qint8
)

But model_qn looks like this:

dumy_CNN(
  (conv): Conv2d(2, 10, kernel_size=(8, 8), stride=(2, 2), padding=(3, 3))
  (lin): DynamicQuantizedLinear(in_features=1024, out_features=4, scale=1.0, zero_point=0)
)

I also checked the weights to confirm that the conv layer was not quantized:

model_qn.conv.weight.data[0,0,0,0].item()
0.02230245992541313

Hi @babak_hss, dynamic quantization is currently supported only for nn.Linear and nn.LSTM. Please see: https://pytorch.org/docs/stable/quantization.html#torch.quantization.quantize_dynamic
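Passing nn.Conv2d in the set is simply ignored, which is why the conv module stays in float. If you need the conv layer quantized, post-training static quantization (eager mode) does support nn.Conv2d. Below is a minimal sketch under that assumption; the class name StaticDummyCNN, the calibration input, and the 64x64 shape are made up for illustration and not from the original post.

import torch
import torch.nn as nn

class StaticDummyCNN(nn.Module):
    def __init__(self, ni, no):
        super().__init__()
        # QuantStub/DeQuantStub mark where tensors switch between float and int8
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(ni, no, 8, 2, 3)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.conv(x)
        return self.dequant(x)

model_fp32 = StaticDummyCNN(2, 10).eval()
model_fp32.qconfig = torch.quantization.get_default_qconfig('fbgemm')
model_prepared = torch.quantization.prepare(model_fp32)

# calibrate observers with representative data (random here, just for the sketch)
model_prepared(torch.randn(1, 2, 64, 64))

model_int8 = torch.quantization.convert(model_prepared)
print(model_int8.conv)  # should now be a quantized conv module

The general intuition is that dynamic quantization quantizes the weights ahead of time and the activations on the fly, which pays off most for layers dominated by weight memory bandwidth (Linear, LSTM); conv layers are covered by the static quantization workflow instead.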
