Is it possible to quantize a model that has more than one input?

I’m wondering if it is possible to quantize a model that has 2 inputs (2 tensors). Basically, I have a GAN model in which the generator module requires 2 inputs for inference. Right now, I’m looking at the Static Quantization example here, but the sample model being quantized has just 1 input.


Yes, it is. I am not sure what exactly your model is, but here is an example of how you could write a model that takes two inputs in the forward:

import torch
import torch.nn as nn

class SomeAwesomeModel(nn.Module):
    def __init__(self):
        super().__init__()
        # One QuantStub per floating-point input
        self.quant_x = torch.quantization.QuantStub()
        self.quant_y = torch.quantization.QuantStub()
        # FloatFunctional so the add can be observed and quantized
        self.func = nn.quantized.FloatFunctional()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x, y):
        qx = self.quant_x(x)
        qy = self.quant_y(y)
        qz = self.func.add(qx, qy)
        z = self.dequant(qz)
        return z

The model above expects two floating-point tensors as input, quantizes them, and after performing some function, returns the dequantized result.
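For completeness, here is a minimal sketch of how you might run the eager-mode static quantization workflow on a two-input model like the one above. It assumes the SomeAwesomeModel class from the previous post, and the random tensors stand in for your own representative calibration data:

import torch

model_fp32 = SomeAwesomeModel()
model_fp32.eval()

# Use the default qconfig for the target backend ('fbgemm' for x86 servers).
model_fp32.qconfig = torch.quantization.get_default_qconfig('fbgemm')

# Insert observers after the QuantStubs and supported modules.
model_prepared = torch.quantization.prepare(model_fp32)

# Calibrate by running representative pairs of inputs through the model.
with torch.no_grad():
    for _ in range(10):
        x = torch.randn(1, 3, 224, 224)
        y = torch.randn(1, 3, 224, 224)
        model_prepared(x, y)

# Convert observed modules to their quantized counterparts.
model_int8 = torch.quantization.convert(model_prepared)

# Inference still takes two float tensors; the QuantStubs quantize them.
out = model_int8(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))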


Thanks a lot @Zafar, this is exactly what I’m looking for. 🙂

@ptrblck Perhaps the need for one QuantStub per model input could be mentioned in the QAT docs? On my first read-through this distinction was not clear to me until I started getting errors in my project code.
