Static Quantization of UNet

Update

  • It seems that ConvTranspose2d is not yet supported for static quantization. As a workaround, you have to dequantize the activations before each unsupported layer and re-quantize them afterwards; in my case this made the model slower than the original float model. Related Forum post

  • I guess it's better to choose models that contain only supported layers when using static quantization.
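The workaround above can be sketched with PyTorch's eager-mode quantization stubs. This is a minimal hypothetical block (not the actual UNet): `QuantStub`/`DeQuantStub` bracket the quantizable part, and the unsupported `ConvTranspose2d` is explicitly left in float by setting its `qconfig` to `None`. The layer sizes and the `fbgemm` backend are assumptions for illustration.

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """Toy block: a quantizable conv followed by an unsupported ConvTranspose2d."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)   # quantizable
        self.dequant = torch.ao.quantization.DeQuantStub()
        self.up = nn.ConvTranspose2d(out_ch, out_ch, 2, stride=2)  # kept in float

    def forward(self, x):
        x = self.quant(x)     # float -> quantized
        x = self.conv(x)      # runs as int8 after convert()
        x = self.dequant(x)   # quantized -> float before the unsupported layer
        x = self.up(x)        # ConvTranspose2d runs in float
        return x

model = UpBlock(3, 8).eval()
model.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
model.up.qconfig = None  # skip the unsupported layer during prepare/convert

prepared = torch.ao.quantization.prepare(model)
prepared(torch.randn(1, 3, 16, 16))  # calibration pass to collect activation stats
quantized = torch.ao.quantization.convert(prepared)

out = quantized(torch.randn(1, 3, 16, 16))
print(out.shape, out.dtype)
```

Every dequantize/quantize round trip adds overhead, which is why interleaving many unsupported layers (as in a UNet decoder full of `ConvTranspose2d`) can end up slower than the float model.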