Quantization error with ConvTranspose2d

Hey guys! I tried to use dequant and quant stubs to skip quantization of the ConvTranspose2d, like this:

import torch
import torch.nn as nn

class Conv2dTranspose(nn.Module):
    def __init__(self, cin, cout, kernel_size, stride, padding, output_padding=0, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.conv_block = nn.Sequential(
                            nn.ConvTranspose2d(cin, cout, kernel_size, stride, padding, output_padding),
                            nn.BatchNorm2d(cout),
                            nn.ReLU()
                            )

        self.quant = torch.ao.quantization.QuantStub()
        self.dequant = torch.ao.quantization.DeQuantStub()

        # Dequantize the input right before the transposed conv runs
        def get_middle_todequant(module, args):
            output = self.dequant(args[0])
            module.qconfig = None
            return output

        # ... and re-quantize its output afterwards
        def get_middle_toquant(module, args, output):
            return self.quant(output)

        self.handle = self.conv_block[0].register_forward_pre_hook(get_middle_todequant)
        self.handle2 = self.conv_block[0].register_forward_hook(get_middle_toquant)

    def forward(self, x):
        out = self.conv_block(x)
        return out

but I still get this error:

NotImplementedError: Could not run 'aten::slow_conv_transpose2d.out' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build).

My torch version is 2.0.1+cu17. Is there a problem with this approach?

Either your stubs aren’t getting converted to actual quant/dequant ops, or your pre-hook isn’t actually updating the input (I think you might need to either modify args directly or not use hooks for this).

Thank you for providing the idea. You were right: the hooks did not work properly. I have modified the code so that the input and output of the ConvTranspose2d layer go through dequant and quant directly in forward.
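For anyone hitting the same error, here is a minimal sketch of the fix described above (the class and parameter names are illustrative, not the original poster's exact code): instead of hooks, call the stubs directly in forward and set qconfig = None on the float block so prepare/convert skip it in eager-mode quantization.

```python
import torch
import torch.nn as nn

class Conv2dTranspose(nn.Module):
    """Keep the ConvTranspose2d block in float: dequantize the input,
    run the block, then re-quantize the output for downstream quantized ops."""
    def __init__(self, cin, cout, kernel_size, stride, padding, output_padding=0):
        super().__init__()
        self.conv_block = nn.Sequential(
            nn.ConvTranspose2d(cin, cout, kernel_size, stride, padding, output_padding),
            nn.BatchNorm2d(cout),
            nn.ReLU(),
        )
        # Skip this submodule during prepare/convert so it stays in float
        self.conv_block.qconfig = None
        self.quant = torch.ao.quantization.QuantStub()
        self.dequant = torch.ao.quantization.DeQuantStub()

    def forward(self, x):
        x = self.dequant(x)        # quantized tensor -> float
        out = self.conv_block(x)   # runs on the float CPU backend
        return self.quant(out)     # float -> quantized again
```

Before convert, QuantStub and DeQuantStub are identity ops, so the module still works in float; after convert they become real quantize/dequantize ops, and the unsupported aten::slow_conv_transpose2d never sees a quantized tensor.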