Questions about QAT v2


Previously I had problems with the quantizable version of my segmentation pipeline, but installing torch from the master branch surprisingly made it work.

Now I am trying to perform QAT on a regression model that has almost the same encoder architecture but a different detection decoder (stacked linears, dropouts, ReLUs, cat operations, and a sigmoid on top).
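
For reference, here is a minimal sketch of how I wired the decoder head for eager-mode QAT (all names, channel sizes, and the dropout rate here are illustrative, not the real model). Note that each independent input gets its own QuantStub, since a shared stub would share one observer across differently scaled inputs:

```python
import torch
import torch.nn as nn
import torch.quantization as tq

class QuantizableDecoder(nn.Module):
    """Illustrative decoder head: linears, dropout, ReLU, cat, sigmoid."""
    def __init__(self, in_ch=64, skip_ch=64, hidden=128):
        super().__init__()
        # one QuantStub per independent input, so each gets its own observer
        self.quant_x = tq.QuantStub()
        self.quant_skip = tq.QuantStub()
        self.dequant = tq.DeQuantStub()
        self.cat = nn.quantized.FloatFunctional()  # observable torch.cat
        self.fc1 = nn.Linear(in_ch + skip_ch, hidden)
        self.relu1 = nn.ReLU()
        self.drop1 = nn.Dropout(0.1)
        self.fc2 = nn.Linear(hidden, 1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x, skip):
        x = self.quant_x(x)
        skip = self.quant_skip(skip)
        y = self.cat.cat([x, skip], dim=1)
        y = self.drop1(self.relu1(self.fc1(y)))
        y = self.fc2(y)
        y = self.dequant(y)  # dequantize first, keep the sigmoid in float
        return self.sigmoid(y)

model = QuantizableDecoder()
model.train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(model, inplace=True)
out = model(torch.randn(4, 64), torch.randn(4, 64))
print(out.shape)  # torch.Size([4, 1])
```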

I am doing everything according to the manual, but nothing works: after some steps the model stops training well compared with the regular (non-quantized) model.

I tested the following hypotheses to find the problem:

  1. Different torch versions: 1.7.1, 1.8.0, 1.8.1, and a build from master;
  2. Wrapped the .cat operation in the decoder with FloatFunctional;
  3. Trained with both fused and unfused modules;
  4. Put QuantStub and DeQuantStub at the same hierarchy level in the model.
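
For hypothesis 3, this is my understanding of the fusion flow, sketched on a toy stand-in (not the real model): fuse the Linear+ReLU pairs in eval mode, then switch back to train mode before preparing for QAT.

```python
import torch
import torch.nn as nn
import torch.quantization as tq

# toy stand-in for one decoder stage: Linear -> ReLU -> Linear
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

model.eval()                                        # fuse_modules expects eval mode
tq.fuse_modules(model, [["0", "1"]], inplace=True)  # Linear+ReLU -> LinearReLU
model.train()                                       # back to train mode for QAT
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(model, inplace=True)
print(type(model[0]).__name__)  # LinearReLU
```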

Do you have any other hypotheses I could test?
Or is there a way to debug the quantized model to find the root cause?

Thanks in advance!