What else is done after forward?

Hi,

    out  = self.qconv5_qbn5_qrelu5(out4)
    out5 = self.qpool5(out)

    y = self.qinterpolate1(out5)

In the code above, out5 is correct.

Inside the forward function of self.qinterpolate1(out5), the return value x is also correct.

import torch.nn.functional as F  # F.interpolate is used in forward below
# QModule and FakeQuantize are defined elsewhere in my quantization code

class QInterpolate(QModule):
    def __init__(self, qi=True, qo=True, num_bits=8, scale_factor=2, mode='bilinear', align_corners=True):
        super(QInterpolate, self).__init__(qi=qi, qo=qo, num_bits=num_bits)
        self.num_bits = num_bits
        self.scale_factor = scale_factor
        self.mode = mode
        self.align_corners = align_corners

    def forward(self, x):
        if hasattr(self, 'qi'):
            self.qi.update(x)
            x = FakeQuantize.apply(x, self.qi)
        x = F.interpolate(x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners)
        if hasattr(self, 'qo'):
            self.qo.update(x)
            x = FakeQuantize.apply(x, self.qo)
        return x
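For context, the fake-quantize step is the only place in this module where values get mapped onto a discrete grid. Below is a minimal, hypothetical sketch of what a quantize-dequantize pass like this typically does (the real FakeQuantize in my code is more involved; the function name and the asymmetric unsigned scheme here are my own assumptions for illustration):

```python
import torch

def fake_quantize(x, num_bits=8):
    # Derive scale/zero-point from the tensor's range (asymmetric, unsigned grid).
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = qmin - torch.round(x.min() / scale)
    # Quantize onto the integer grid, then dequantize back to float.
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax)
    return (q - zero_point) * scale

x = torch.tensor([0.0, 0.3015, 0.6108, 1.4767])
y = fake_quantize(x)
# y lands on multiples of scale, so it can look "rounded" relative to x,
# but each value stays within scale/2 of the original.
```

So values that differ from the input by less than one quantization step are expected here; a difference larger than that would indicate a real bug.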

However, y = self.qinterpolate1(out5) in the first snippet is not correct.

More details:
[debugger screenshots of x and y were attached here]

I cannot understand why a rounding function seems to be applied to the returned tensor y.

Thanks for the detailed question. Can you paste the data as text rather than as an image, though? That will make it easier to debug.

@yiftach
Thank you for your reply!

I am debugging the code on Windows 10 with VSCode. The data you mentioned can't be copied directly, so I am trying to save it to a txt file and then paste it to the forum.

However, I suddenly found a very weird issue (see the attached picture): the data in y is different from yy, but I didn't change anything in y. I only sliced y[0, 0, :, :] to make yy.

Interesting, but it is still hard to debug your issue without a complete code example. You could just print the tensor instead of saving it to a file (you will not need to alter it). Check whether this still happens for a small tensor, and if so, print the input too, so we have a snippet we can run to reproduce and study the issue.
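By the way, slicing alone cannot change the values: yy = y[0, 0, :, :] is a view of the same underlying storage as y, not a copy. A quick self-contained sanity check:

```python
import torch

y = torch.arange(24, dtype=torch.float32).reshape(1, 2, 3, 4)
yy = y[0, 0, :, :]                    # basic indexing returns a view, not a copy
assert yy.data_ptr() == y.data_ptr()  # same start of the underlying storage
assert torch.equal(yy, y[0, 0])       # identical values
```

If the two really look different in the debugger, the display is the suspect, not the data.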

@yiftach

Please check the data I printed.
It is so weird!
The y shown in the VSCode debugger is rounded off, but the printed y is correct.


print x
tensor([[[[0.0000, 0.3015, 0.6108,  ..., 1.4767, 0.9974, 0.5180],
          [0.0000, 0.1933, 0.3943,  ..., 1.7086, 1.2602, 0.8118],
          [0.0000, 0.0850, 0.1701,  ..., 1.9406, 1.5231, 1.1056],
          ...,
          [0.0000, 0.2087, 0.4175,  ..., 1.7473, 1.3066, 0.8659],
          [0.0000, 0.2629, 0.5180,  ..., 1.4226, 0.9278, 0.4330],
          [0.0000, 0.3093, 0.6185,  ..., 1.0979, 0.5489, 0.0000]],

         [[0.0000, 0.3015, 0.6108,  ..., 1.4767, 0.9974, 0.5180],
          [0.0000, 0.1933, 0.3943,  ..., 1.7086, 1.2602, 0.8118],
          [0.0000, 0.0850, 0.1701,  ..., 1.9406, 1.5231, 1.1056],
          ...,
          [0.0000, 0.2087, 0.4175,  ..., 1.7473, 1.3066, 0.8659],
          [0.0000, 0.2629, 0.5180,  ..., 1.4226, 0.9278, 0.4330],
          [0.0000, 0.3093, 0.6185,  ..., 1.0979, 0.5489, 0.0000]],

         [[0.0000, 0.3015, 0.6108,  ..., 1.4767, 0.9974, 0.5180],
          [0.0000, 0.1933, 0.3943,  ..., 1.7086, 1.2602, 0.8118],
          [0.0000, 0.0850, 0.1701,  ..., 1.9406, 1.5231, 1.1056],
          ...,
          [0.0000, 0.2087, 0.4175,  ..., 1.7473, 1.3066, 0.8659],
          [0.0000, 0.2629, 0.5180,  ..., 1.4226, 0.9278, 0.4330],
          [0.0000, 0.3093, 0.6185,  ..., 1.0979, 0.5489, 0.0000]],

         ...,

         [[0.0000, 0.3015, 0.6108,  ..., 1.4767, 0.9974, 0.5180],
          [0.0000, 0.1933, 0.3943,  ..., 1.7086, 1.2602, 0.8118],
          [0.0000, 0.0850, 0.1701,  ..., 1.9406, 1.5231, 1.1056],
          ...,
          [0.0000, 0.2087, 0.4175,  ..., 1.7473, 1.3066, 0.8659],
          [0.0000, 0.2629, 0.5180,  ..., 1.4226, 0.9278, 0.4330],
          [0.0000, 0.3093, 0.6185,  ..., 1.0979, 0.5489, 0.0000]],

         [[0.0000, 0.3015, 0.6108,  ..., 1.4767, 0.9974, 0.5180],
          [0.0000, 0.1933, 0.3943,  ..., 1.7086, 1.2602, 0.8118],
          [0.0000, 0.0850, 0.1701,  ..., 1.9406, 1.5231, 1.1056],
          ...,
          [0.0000, 0.2087, 0.4175,  ..., 1.7473, 1.3066, 0.8659],
          [0.0000, 0.2629, 0.5180,  ..., 1.4226, 0.9278, 0.4330],
          [0.0000, 0.3093, 0.6185,  ..., 1.0979, 0.5489, 0.0000]],

         [[0.0000, 0.3015, 0.6108,  ..., 1.4767, 0.9974, 0.5180],
          [0.0000, 0.1933, 0.3943,  ..., 1.7086, 1.2602, 0.8118],
          [0.0000, 0.0850, 0.1701,  ..., 1.9406, 1.5231, 1.1056],
          ...,
          [0.0000, 0.2087, 0.4175,  ..., 1.7473, 1.3066, 0.8659],
          [0.0000, 0.2629, 0.5180,  ..., 1.4226, 0.9278, 0.4330],
          [0.0000, 0.3093, 0.6185,  ..., 1.0979, 0.5489, 0.0000]]]],
       device='cuda:0', grad_fn=<FakeQuantizeBackward>)
print y
tensor([[[[0.0000, 0.3015, 0.6108,  ..., 1.4767, 0.9974, 0.5180], 
          [0.0000, 0.1933, 0.3943,  ..., 1.7086, 1.2602, 0.8118], 
          [0.0000, 0.0850, 0.1701,  ..., 1.9406, 1.5231, 1.1056], 
          ...,
          [0.0000, 0.2087, 0.4175,  ..., 1.7473, 1.3066, 0.8659], 
          [0.0000, 0.2629, 0.5180,  ..., 1.4226, 0.9278, 0.4330], 
          [0.0000, 0.3093, 0.6185,  ..., 1.0979, 0.5489, 0.0000]],

         [[0.0000, 0.3015, 0.6108,  ..., 1.4767, 0.9974, 0.5180], 
          [0.0000, 0.1933, 0.3943,  ..., 1.7086, 1.2602, 0.8118], 
          [0.0000, 0.0850, 0.1701,  ..., 1.9406, 1.5231, 1.1056], 
          ...,
          [0.0000, 0.2087, 0.4175,  ..., 1.7473, 1.3066, 0.8659], 
          [0.0000, 0.2629, 0.5180,  ..., 1.4226, 0.9278, 0.4330], 
          [0.0000, 0.3093, 0.6185,  ..., 1.0979, 0.5489, 0.0000]],

         [[0.0000, 0.3015, 0.6108,  ..., 1.4767, 0.9974, 0.5180], 
          [0.0000, 0.1933, 0.3943,  ..., 1.7086, 1.2602, 0.8118], 
          [0.0000, 0.0850, 0.1701,  ..., 1.9406, 1.5231, 1.1056], 
          ...,
          [0.0000, 0.2087, 0.4175,  ..., 1.7473, 1.3066, 0.8659], 
          [0.0000, 0.2629, 0.5180,  ..., 1.4226, 0.9278, 0.4330], 
          [0.0000, 0.3093, 0.6185,  ..., 1.0979, 0.5489, 0.0000]],

         ...,

         [[0.0000, 0.3015, 0.6108,  ..., 1.4767, 0.9974, 0.5180], 
          [0.0000, 0.1933, 0.3943,  ..., 1.7086, 1.2602, 0.8118], 
          [0.0000, 0.0850, 0.1701,  ..., 1.9406, 1.5231, 1.1056], 
          ...,
          [0.0000, 0.2087, 0.4175,  ..., 1.7473, 1.3066, 0.8659], 
          [0.0000, 0.2629, 0.5180,  ..., 1.4226, 0.9278, 0.4330], 
          [0.0000, 0.3093, 0.6185,  ..., 1.0979, 0.5489, 0.0000]],

         [[0.0000, 0.3015, 0.6108,  ..., 1.4767, 0.9974, 0.5180], 
          [0.0000, 0.1933, 0.3943,  ..., 1.7086, 1.2602, 0.8118], 
          [0.0000, 0.0850, 0.1701,  ..., 1.9406, 1.5231, 1.1056], 
          ...,
          [0.0000, 0.2087, 0.4175,  ..., 1.7473, 1.3066, 0.8659],
          [0.0000, 0.2629, 0.5180,  ..., 1.4226, 0.9278, 0.4330],
          [0.0000, 0.3093, 0.6185,  ..., 1.0979, 0.5489, 0.0000]],

         [[0.0000, 0.3015, 0.6108,  ..., 1.4767, 0.9974, 0.5180],
          [0.0000, 0.1933, 0.3943,  ..., 1.7086, 1.2602, 0.8118],
          [0.0000, 0.0850, 0.1701,  ..., 1.9406, 1.5231, 1.1056],
          ...,
          [0.0000, 0.2087, 0.4175,  ..., 1.7473, 1.3066, 0.8659],
          [0.0000, 0.2629, 0.5180,  ..., 1.4226, 0.9278, 0.4330],
          [0.0000, 0.3093, 0.6185,  ..., 1.0979, 0.5489, 0.0000]]]],
       device='cuda:0', grad_fn=<FakeQuantizeBackward>)
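One more thing worth noting when comparing these dumps: by default PyTorch prints tensors with only four decimal digits, so the printed values above are themselves rounded for display. To compare at (nearly) full precision, the print precision can be raised:

```python
import torch

x = torch.tensor([0.30154321])
print(x)                          # default display rounds to 4 decimal digits
torch.set_printoptions(precision=8)
print(x)                          # shows more digits of the stored float32 value
```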

@yiftach

Thank you for your reply!

The issue is fixed.
It was not related to PyTorch; it was caused by VSCode. For some unknown reason, after I cleared all of the VSCode cache files, the issue went away.