AttributeError: 'QFunctional' object has no attribute 'activation_post_process'

I’m trying to quantise a simple net with QFunctional to add two quantised tensors:

import torch
from torch.nn import Module


class MinimalModel(Module):

    def __init__(self):
        super().__init__()
        self.quant_x = torch.quantization.QuantStub()
        self.quant_y = torch.quantization.QuantStub()
        self.x_y_add = torch.nn.quantized.QFunctional()

    def forward(self, x, y):
        y = self.quant_y(y)
        x = self.quant_x(x)
        result = self.x_y_add.add(y, x) # xi + mean
        return result

model = MinimalModel().cpu()
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
model = torch.quantization.prepare_qat(model)
model = torch.quantization.convert(model) 
x = torch.rand((1, 48, 2, 2))
y = torch.rand((1, 48, 2, 2))
output = model(x, y)

I get an error:

  File "/home/guests/vira/repos/cross-platform-support/minimal_example_add.py", line 102, in forward
    result = self.x_y_add.add(y, x) # xi + mean
  File "/usr/local/lib/python3.9/site-packages/torch/nn/quantized/modules/functional_modules.py", line 187, in add
    r = self.activation_post_process(r)
  File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1181, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'QFunctional' object has no attribute 'activation_post_process'


Hi @thekoshkina, if you replace torch.nn.quantized.QFunctional with torch.nn.quantized.FloatFunctional, it should work.

FloatFunctional is what you want in your floating point model. The framework will swap FloatFunctional to QFunctional during the convert step.
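For example, a minimal sketch of the fixed model, identical to your snippet except for the FloatFunctional swap (the explicit train()/eval() toggles around the QAT steps are my addition):

import torch
from torch.nn import Module


class MinimalModel(Module):

    def __init__(self):
        super().__init__()
        self.quant_x = torch.quantization.QuantStub()
        self.quant_y = torch.quantization.QuantStub()
        # FloatFunctional in the float model; convert() swaps it to QFunctional
        self.x_y_add = torch.nn.quantized.FloatFunctional()

    def forward(self, x, y):
        y = self.quant_y(y)
        x = self.quant_x(x)
        return self.x_y_add.add(y, x)


model = MinimalModel().cpu()
model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')
model = torch.quantization.prepare_qat(model.train())
# (in a real run, QAT training / calibration would happen here)
model = torch.quantization.convert(model.eval())
output = model(torch.rand(1, 48, 2, 2), torch.rand(1, 48, 2, 2))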

Side note: feel free to try out this workflow (Quantization — PyTorch master documentation), which can insert the quants/dequants for you automatically and can handle functions and methods.
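Assuming that pointer is to FX graph mode quantization (the workflow that traces the model and inserts quants/dequants for you), a rough sketch of the same two-input add could look like this; the prepare_qat_fx/convert_fx signatures have shifted across PyTorch releases (newer ones take example_inputs, older ones only a qconfig dict), so treat it as an illustration, not a drop-in recipe:

import torch
from torch.quantization import get_default_qat_qconfig
from torch.quantization.quantize_fx import prepare_qat_fx, convert_fx


class MinimalModel(torch.nn.Module):
    # No QuantStub/FloatFunctional needed: FX graph mode handles the plain add
    def forward(self, x, y):
        return x + y


model = MinimalModel().train()
qconfig_dict = {"": get_default_qat_qconfig('fbgemm')}
example_inputs = (torch.rand(1, 48, 2, 2), torch.rand(1, 48, 2, 2))
prepared = prepare_qat_fx(model, qconfig_dict, example_inputs)  # older releases: prepare_qat_fx(model, qconfig_dict)
# (QAT training / calibration would go here)
quantized = convert_fx(prepared.eval())
output = quantized(*example_inputs)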


Thank you! It worked.