Unable to quantize the model due to RuntimeError

I am trying to quantize a model which has upsampling layers at specific parts of the network, but I am unable to quantize it due to this error:

Expected a value of type 'Tensor' for argument 'target_size' but instead found type 'List[int]'.
Inferred 'target_size' to be of type 'Tensor' because it was not annotated with an explicit type.

Implementation of the upsampling layer

import torch.nn as nn


class Upsample(nn.Module):
    def __init__(self):
        super(Upsample, self).__init__()

    def forward(self, x, target_size):
        # assert (x.data.dim() == 4)

        _, _, tH, tW = target_size[0], target_size[1], target_size[2], target_size[3]

        B = x.size(0)
        C = x.size(1)
        H = x.size(2)
        W = x.size(3)

        # Nearest-neighbour upsample: repeat each pixel (tH // H) x (tW // W) times.
        return x.view(B, C, H, 1, W, 1).expand(B, C, H, tH // H, W, tW // W).contiguous().view(B, C, tH, tW)
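
For reference, the view/expand trick above is a nearest-neighbour upsample that assumes the target size is an integer multiple of the input size; a quick sanity check against F.interpolate (using the Upsample class above):

import torch
import torch.nn.functional as F

up = Upsample()
x = torch.arange(4.0).view(1, 1, 2, 2)
out = up(x, [1, 1, 4, 4])  # each input element becomes a 2x2 block
print(out.shape)  # torch.Size([1, 1, 4, 4])
print(torch.equal(out, F.interpolate(x, scale_factor=2, mode="nearest")))  # True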

Upsampling function usage

up = self.upsample1(x7, downsample4.size())

Quantization code

model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
print(model.qconfig)

torch.quantization.prepare(model, inplace=True)

print('Post Training Quantization Prepare : Inserting Observers')
print('\n Downsampling1 Block: After observer insertion \n\n', model.down1.conv1)
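
# NOTE: for post-training static quantization, run representative calibration
# data through the prepared model at this point, before convert(), so the
# observers can record activation ranges.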

torch.quantization.convert(model, inplace=True)
print("Post Training Quantization : Convert Done!")
print("\n Downsampling1Block: After quantization \n\n", model.down1.conv1)
torch.jit.save(torch.jit.script(model), quantized_model_path)

This is my first time trying to quantize a model in PyTorch, so I am totally clueless about how to solve this. Thanks in advance.

The error doesn’t seem related to quantization. Since upsample1 is a custom module, you cannot quantize it currently, so the QuantStub/DeQuantStub pairs need to be inserted in the model correctly.
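
A minimal sketch of that stub placement (the surrounding Conv2d layer here is hypothetical, just to illustrate the pattern): quantizable layers run between a QuantStub and a DeQuantStub, while the custom Upsample stays in the float domain.

import torch
import torch.nn as nn


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv1 = nn.Conv2d(3, 16, 3)  # quantizable op
        self.dequant = torch.quantization.DeQuantStub()
        self.upsample1 = Upsample()       # custom module, left in fp32

    def forward(self, x, target_size):
        x = self.quant(x)    # fp32 -> quantized domain
        x = self.conv1(x)
        x = self.dequant(x)  # back to fp32 before the custom module
        return self.upsample1(x, target_size)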

This is an error from torch.jit.script; you can annotate the forward like the following:

def forward(self, x, target_size):
    # type: (Tensor, List[int]) -> Tensor
    # assert (x.data.dim() == 4)

to make it scriptable.
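
Equivalently, if you are on Python 3 you can use standard type annotations instead of the type comment; a sketch of the same forward annotated that way:

from typing import List

import torch
import torch.nn as nn


class Upsample(nn.Module):
    def forward(self, x: torch.Tensor, target_size: List[int]) -> torch.Tensor:
        B, C, H, W = x.size(0), x.size(1), x.size(2), x.size(3)
        tH, tW = target_size[2], target_size[3]
        return x.view(B, C, H, 1, W, 1).expand(B, C, H, tH // H, W, tW // W).contiguous().view(B, C, tH, tW)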

Thanks for the reply. Yes, that seems to be the case. I converted target_size to a torch tensor before passing it to the upsample function, which more or less solved the problem.

From this

x7 = self.conv7(x6)
# UPSAMPLE
up = self.upsample1(x7, downsample4.size())

to

x7 = self.conv7(x6)
# UPSAMPLE
featuremap_size = torch.tensor(downsample4.size())
up = self.upsample1(x7, featuremap_size, self.inference)

But the model that I am trying to optimize is YOLOv4, and it has some activation functions (Mish and Softplus) that are not supported by PyTorch's quantization. Therefore, even after this fix, I was not able to quantize the model in the end.

You can still surround these ops with DeQuantStub and QuantStub.
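
A rough sketch of that sandwich, assuming Mish implemented as x * tanh(softplus(x)) (the QuantSafeMish name is made up for illustration): the unsupported activation is computed in fp32 between a DeQuantStub and a QuantStub.

import torch
import torch.nn as nn
import torch.nn.functional as F


class QuantSafeMish(nn.Module):
    def __init__(self):
        super(QuantSafeMish, self).__init__()
        self.dequant = torch.quantization.DeQuantStub()
        self.quant = torch.quantization.QuantStub()

    def forward(self, x):
        x = self.dequant(x)                # leave the quantized domain
        x = x * torch.tanh(F.softplus(x))  # Mish, computed in fp32
        return self.quant(x)               # re-enter the quantized domain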