I was trying to quantize a ResNet model. I thought I would start with a toy model, so instead of coding one myself (I am also slightly scared of the big model), I did the following:
```python
import torch
import torch.nn as nn
from torchvision import models

class CustomResNet(nn.Module):
    def __init__(self, num_classes, feature_extract):
        super(CustomResNet, self).__init__()
        self.resnet = models.resnet18(pretrained=False)
        # I am working only with b/w images, so conv1 takes a single channel
        self.resnet.conv1 = nn.Conv2d(in_channels=1, out_channels=64,
                                      kernel_size=(7, 7), stride=(2, 2))
        self.resnet.num_classes = num_classes  # this is specific to my test case
        set_parameter_requires_grad(self.resnet, feature_extract)
        self.quant = torch.quantization.QuantStub()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.resnet(x)
        x = self.dequant(x)
        return x
```
Then I quantized the model, and after printing it I could see that all the modules had been quantized.

However, when I try to evaluate it on a sample dataset, I get the following error:
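For context, the quantization step followed the standard eager-mode post-training static flow (qconfig, prepare, calibrate, convert). Here is a minimal sketch of that flow on a toy module; `TinyNet` is a hypothetical stand-in for the real model (note it has no skip connections, so it converts cleanly):

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Hypothetical stand-in for CustomResNet: quant/dequant stubs around float ops."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(1, 4, 3)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

m = TinyNet().eval()
m.qconfig = torch.quantization.get_default_qconfig('fbgemm')  # x86 backend
torch.quantization.prepare(m, inplace=True)   # insert observers
with torch.no_grad():
    m(torch.randn(2, 1, 8, 8))                # calibration pass with sample data
torch.quantization.convert(m, inplace=True)   # swap in quantized modules
print(type(m.conv))                           # quantized Conv2d after convert
```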
RuntimeError: Could not run 'aten::add_.Tensor' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::add_.Tensor' is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, Meta, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
My assumption is that while the individual modules have been quantized, the skip connections in ResNet use a plain floating-point in-place add (`aten::add_`), and that operation has no quantized implementation, so it fails when it receives quantized tensors.
Can anyone kindly review this and let me know whether my assumption is correct?