Can nn.quantized.FloatFunctional().cat() be used multiple times in one module?

While reading the source code of torchvision.models.quantization.inception_v3, I found that a single `nn.quantized.FloatFunctional()` instance is used for 3 different `cat()` calls in QuantizableInceptionE. So when I finished training, there was only one group of quantization params (min_val/max_val/scale/zero_point). If I understand correctly, we need 3 different groups of quantization params, one for each concat operation.
Can anyone help explain whether this is a bug or I misunderstood it?

You are right, it looks like we need 3 different FloatFunctional instances. Could you file an issue for it? We will take a look.

Thanks! I see it has been fixed in the latest version by this commit.