fuse_modules: support for more sequences

I noticed that torch.ao.quantization.fuse_modules only supports a few sequences. My model uses the sequence [Conv1d, ReLU, BatchNorm1d], which is not supported at the moment. I have two questions: when will this sequence be supported, and what is the workaround in the meantime? If I don't run module fusion, I get the following error: "RuntimeError: Could not run 'aten::native_batch_norm' with arguments from the 'QuantizedCPU' backend."
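For reference, here is a minimal sketch of what I'm trying to do (the layer names and shapes are just illustrative, not my actual model):

```python
import torch.nn as nn
import torch.ao.quantization as tq

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # the ordering in question: conv -> relu -> bn
        self.conv = nn.Conv1d(8, 16, kernel_size=3)
        self.relu = nn.ReLU()
        self.bn = nn.BatchNorm1d(16)

    def forward(self, x):
        return self.bn(self.relu(self.conv(x)))

model = Net().eval()
tq.fuse_modules(model, [['conv', 'relu', 'bn']], inplace=True)  # this call fails
```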

Thanks!

Hi Hua Chen,

Actually, fusing the modules you've listed should be supported by the fuse_modules API. You can find an example of how this is done in this tutorial (search for ConvBNReLU). As for the error you're getting, it usually means you're passing a quantized tensor to a non-quantized kernel, so you may be missing a DeQuantStub somewhere in your model. Please see this doc for more detail.
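For example, the usual eager-mode pattern looks something like this (a minimal sketch; your module and attribute names will differ):

```python
import torch.nn as nn
import torch.ao.quantization as tq

class WrappedModel(nn.Module):
    def __init__(self, float_model):
        super().__init__()
        self.quant = tq.QuantStub()      # float -> quantized at the input
        self.model = float_model
        self.dequant = tq.DeQuantStub()  # quantized -> float at the output

    def forward(self, x):
        x = self.quant(x)
        x = self.model(x)
        return self.dequant(x)
```

Any op that doesn't have a quantized kernel needs to see float tensors, so a DeQuantStub has to sit in front of it.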

Best,
-Andrew

Hi Andrew,
Thanks for your reply! Here is the error message I get from fuse_modules:

AssertionError: did not find fuser method for: (<class 'torch.nn.modules.conv.Conv1d'>, <class 'torch.nn.modules.activation.ReLU'>, <class 'torch.nn.modules.batchnorm.BatchNorm1d'>)

In the tutorial, the order of the fused modules is 'conv' + 'bn' + 'relu', whereas mine is 'conv' + 'relu' + 'bn'. Would that be a problem?

Best,
Hua

Hi Hua Chen,

My mistake, I misread the order of the ops. Unfortunately, as you suspected, conv + relu + bn is currently not supported, and I'm not aware of any immediate plans to add support for it. What you can do is fuse only conv + relu, which is supported (see this link for the full list of supported patterns).
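Something like this (assuming your submodules are named conv and relu; adjust to match your model):

```python
import torch.ao.quantization as tq

model.eval()
# fuse just the supported Conv1d + ReLU pair; BatchNorm1d stays separate
tq.fuse_modules(model, [['conv', 'relu']], inplace=True)
```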

Best,
-Andrew