PyTorch 1.3 quantization for ResNet50: accuracy is zero after fuse_modules

Hi

I am experimenting with PyTorch 1.3 quantization for ResNet50. I took the pre-trained model from the model zoo.

Please find below the accuracy (on 100 images) and model size at different stages of my experiment:

Size (MB): 102.491395
{"metric": "original_resnet50_val_accuracy", "value": 93.75}
Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
Size (MB): 102.145116
{"metric": "fused_resnet50_val_accuracy", "value": 0.0}
ConvReLU2d(
  (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
  (1): ReLU()
)

{"metric": "quntize_per_tensor_resent50_val_accuracy", "value": 0.0}
{"metric": "quntize_per_channel_resent50_val_accuracy", "value": 0.0}
Size (MB): 25.65341
QuantizedConvReLU2d(3, 64, kernel_size=(7, 7), stride=(2, 2), scale=1.0, zero_point=0, padding=(3, 3))
Size (MB): 25.957137
QuantizedConvReLU2d(3, 64, kernel_size=(7, 7), stride=(2, 2), scale=1.0, zero_point=0, padding=(3, 3))
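The Size (MB) numbers above are measured by saving the model's state_dict to disk and reading the file size, roughly as in the PyTorch quantization tutorial (the helper name here is my own):

```python
import os
import tempfile

import torch
import torch.nn as nn

# Serialize the state_dict and read the file size on disk, as in the
# PyTorch quantization tutorial (the helper name is my own).
def model_size_mb(model: nn.Module) -> float:
    with tempfile.NamedTemporaryFile(suffix=".pt", delete=False) as f:
        path = f.name
    torch.save(model.state_dict(), path)
    size = os.path.getsize(path)
    os.remove(path)
    return size / 1e6

print(f"Size (MB): {model_size_mb(nn.Linear(256, 256)):.6f}")
```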

Not sure where it went wrong.

Please find below the code for fusing the layers:

def fuse_model(m):
    # Fuse the stem: conv1 + bn1 + relu
    torch.quantization.fuse_modules(m, [["conv1", "bn1", "relu"]], inplace=True)

    # Fuse each Bottleneck block in layers 1-4
    for layer in [m.layer1, m.layer2, m.layer3, m.layer4]:
        for mod in layer:
            torch.quantization.fuse_modules(mod, [["conv1", "bn1"]], inplace=True)
            torch.quantization.fuse_modules(mod, [["conv2", "bn2"]], inplace=True)
            torch.quantization.fuse_modules(mod, [["conv3", "bn3", "relu"]], inplace=True)
            if mod.downsample:
                torch.quantization.fuse_modules(mod.downsample, [["0", "1"]], inplace=True)

    return m
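A quick way to sanity-check a fusion list before running a full evaluation (a standalone sketch, not tied to ResNet): fuse a single Conv-BN-ReLU block in eval mode and verify the fused module reproduces the original outputs.

```python
import torch
import torch.nn as nn

# A toy Conv-BN-ReLU block, just to test a fusion list in isolation.
class Block(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

torch.manual_seed(0)
m = Block().eval()      # fusion folds BN using running stats, so call eval() first
x = torch.randn(1, 3, 16, 16)
ref = m(x)

# inplace=False returns a fused copy; the original module is untouched
fused = torch.quantization.fuse_modules(m, [["conv", "bn", "relu"]], inplace=False)
out = fused(x)
print(torch.allclose(ref, out, atol=1e-5))
```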

I modified my fuse_model function and now I see the same accuracy as the non-fused model, but the quantized accuracy is zero.

Size (MB): 102.491395
{"metric": "original_resnet50_val_accuracy", "value": 93.75}
Size (MB): 102.143772
{"metric": "fused_resnet50_val_accuracy", "value": 93.75}
{"metric": "quntize_per_tensor_resent50_val_accuracy", "value": 0.0}
{"metric": "quntize_per_channel_resent50_val_accuracy", "value": 0.0}
Size (MB): 25.653416
Size (MB): 25.957149

The change in the fuse function is that I added relu to the fuse lists in layers 1 to 4.
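For anyone hitting the same thing, one detail worth noting: torchvision's Bottleneck reuses a single self.relu module after conv1, conv2, and the residual add, and fuse_modules replaces every fused submodule except the first with nn.Identity, so fusing a shared relu silently disables it at its other call sites. A toy reproduction (the block and its names here are made up, not the actual ResNet code):

```python
import torch
import torch.nn as nn

# Hypothetical two-stage block that reuses ONE relu module, the way
# torchvision's Bottleneck does (module names here are made up).
class SharedReLUBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(8)
        self.conv2 = nn.Conv2d(8, 8, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()   # shared by both stages

    def forward(self, x):
        x = self.relu(self.bn1(self.conv1(x)))
        return self.relu(self.bn2(self.conv2(x)))

m = SharedReLUBlock().eval()
# Fusing the shared relu into conv1 replaces self.relu with nn.Identity,
# so the second stage silently loses its activation.
torch.quantization.fuse_modules(m, [["conv1", "bn1", "relu"]], inplace=True)
print(type(m.relu))
```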

It's great to see that the accuracy post-fusion is high. What are you using for calibration before you quantize the model?

The issue was with the calibration data set. I randomly selected 1024 training samples from the ImageNet dataset, and now I see good results:

Size (MB): 102.491395
{"metric": "original_resnet50_val_accuracy", "value": 90.234375}
Size (MB): 102.143772
{"metric": "fused_resnet50_val_accuracy", "value": 90.234375}
calibration
{"metric": "quntize_per_tensor_resent50_val_accuracy", "value": 78.22265625}
after quantization
{"metric": "quntize_per_tensor_resent50_val_accuracy", "value": 88.28125}
calibration
{"metric": "quntize_per_tensor_resent50_val_accuracy", "value": 76.26953125}
after quantization
{"metric": "quntize_per_channel_resent50_val_accuracy", "value": 89.84375}
Size of the per-tensor quantized model:
Size (MB): 25.653446
Size of the per-channel quantized model:
Size (MB): 25.957137
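For completeness, the calibrate-then-convert flow follows the standard eager-mode post-training static quantization recipe. A minimal sketch on a toy model (the real run feeds the 1024 ImageNet samples instead of random tensors, and "fbgemm" assumes an x86 server backend):

```python
import torch
import torch.nn as nn

# Eager-mode post-training static quantization on a toy model:
# qconfig -> prepare (insert observers) -> calibrate -> convert.
model = nn.Sequential(
    torch.quantization.QuantStub(),
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    torch.quantization.DeQuantStub(),
).eval()

# "fbgemm" is the x86 server backend (an assumption about the machine)
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)

# Calibration: run representative inputs so the observers record
# activation ranges (the real run used 1024 ImageNet training samples).
for _ in range(8):
    model(torch.randn(1, 3, 16, 16))

torch.quantization.convert(model, inplace=True)
print(type(model[1]))   # the Conv2d is now an 8-bit quantized conv
```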


Hey Tiru, can you please share your full code? I am facing the same issue.

Hi @Tiru_B, can you share your code? I am having the same issue in spite of adding a relu layer.

You can find it under my GitHub user ID, tiru1930.
