What does it mean to 'Ensure that modules are not reused' in quantization?

In the Quantization Workflows section of the documentation here, there is a list describing how to go about quantizing a network, in which the 2nd item says:

Post Training Static Quantization: This is the most commonly used form of quantization where the weights are quantized ahead of time and the scale factor and bias for the activation tensors is pre-computed based on observing the behavior of the model during a calibration process. Post Training Quantization is typically used when both memory bandwidth and compute savings are important with CNNs being a typical use case. The general process for doing post training quantization is:

1. Prepare the model:
   a. Specify where the activations are quantized and dequantized explicitly by adding QuantStub and DeQuantStub modules.
   b. Ensure that modules are not reused.
   c. Convert any operations that require requantization into modules.
2. ...
3. etc.
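
For reference, this is roughly what step 1a looks like in my model after adding the stubs (just a minimal sketch; the conv/relu layers are placeholders I picked, not from the docs):

import torch
import torch.nn as nn
from torch.quantization import QuantStub, DeQuantStub

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = QuantStub()        # marks where fp32 activations get quantized
        self.conv = nn.Conv2d(3, 16, 3)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()    # marks where quantized activations go back to fp32

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        x = self.dequant(x)
        return x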

What I'm referring to here is the b. part, which says:

b. Ensure that modules are not reused.

What does this mean exactly? Is this referring to the QuantStub and DeQuantStub attributes, or not?
I mean, can I simply do:

input1 = self.quant(input1)
input2 = self.quant(input2)
input3 = self.quant(input3)

and later on do:

output1 = self.dequant(output1)
output2 = self.dequant(output2)
output3 = self.dequant(output3)

If it is, then how are we supposed to handle multiple inputs and multiple outputs for quantization/dequantization?
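
For example, does part b mean that I should create a separate stub instance for every input and output? Something along these lines is my best guess (the attribute names are mine, not from the docs):

in __init__:

self.quant1 = QuantStub()
self.quant2 = QuantStub()
self.quant3 = QuantStub()
self.dequant1 = DeQuantStub()
self.dequant2 = DeQuantStub()
self.dequant3 = DeQuantStub()

and in forward:

input1 = self.quant1(input1)
input2 = self.quant2(input2)
input3 = self.quant3(input3)
...
output1 = self.dequant1(output1)
output2 = self.dequant2(output2)
output3 = self.dequant3(output3)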

If not, then I don't understand exactly what this means and would greatly appreciate it if anyone could shed some light on this.

By the way, I would also appreciate it if anyone could clarify what this '97 means in the following section (which is item 3 of the same list I just asked about):

  1. Specify the configuration of the quantization methods ‘97 such as selecting symmetric or asymmetric quantization and MinMax or L2Norm calibration techniques.
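
(For what it's worth, my rough understanding is that this step amounts to setting a qconfig on the model before calibration, something like the sketch below, though I'm not sure I have the observer arguments right:)

import torch
from torch.quantization import QConfig, MinMaxObserver, get_default_qconfig

# model is an instance of my float model, already prepared with the stubs above
# option 1: take the default configuration for the fbgemm (x86) backend
model.qconfig = get_default_qconfig('fbgemm')

# option 2: build a QConfig by hand, picking the observer and the quantization scheme
model.qconfig = QConfig(
    activation=MinMaxObserver.with_args(dtype=torch.quint8,
                                        qscheme=torch.per_tensor_affine),    # asymmetric activations
    weight=MinMaxObserver.with_args(dtype=torch.qint8,
                                    qscheme=torch.per_tensor_symmetric))     # symmetric weights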

Thanks a lot in advance