Hi, I have a model that includes a PReLU layer, which is not supported for quantization in my current PyTorch version. How can I quantize this model for x86 CPU? I tried defining the model in the following way:
```python
self.convbn1 = QuantizableConvBNBlock(xxx)  # defined elsewhere
self.prelu = nn.PReLU()
self.convbn2 = QuantizableConvBNBlock(xxx)
self.quant = torch.quantization.QuantStub()
self.dequant = torch.quantization.DeQuantStub()

def forward(self, x):
    x = self.quant(x)
    x = self.convbn1(x)
    x = self.dequant(x)
    x = self.prelu(x)
    x = self.quant(x)
    x = self.convbn2(x)
    ...
```
However, after performing quantization-aware training following the tutorial, the eval results are terrible. What is the reason, and how can I solve it?
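For reference, here is a minimal self-contained sketch of the dequant → PReLU → quant pattern I am describing that runs end to end for me. It is only an illustration, not my real model: plain `nn.Conv2d` layers stand in for `QuantizableConvBNBlock`, the shapes are made up, and I give the second entry into the quantized domain its own `QuantStub` (each stub collects its own observer statistics) while keeping PReLU in fp32 by clearing its qconfig:

```python
import torch
import torch.nn as nn

class PReLUWrapModel(nn.Module):
    """Toy stand-in for the model above: conv -> (fp32) PReLU -> conv."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv1 = nn.Conv2d(3, 8, 3, padding=1)   # placeholder for convbn1
        self.dequant = torch.quantization.DeQuantStub()
        self.prelu = nn.PReLU()
        # separate stub for re-entering the quantized domain after PReLU
        self.quant2 = torch.quantization.QuantStub()
        self.conv2 = nn.Conv2d(8, 8, 3, padding=1)   # placeholder for convbn2
        self.dequant2 = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.conv1(x)
        x = self.dequant(x)   # drop to fp32 so the unsupported op can run
        x = self.prelu(x)
        x = self.quant2(x)    # re-quantize with its own observer stats
        x = self.conv2(x)
        return self.dequant2(x)

model = PReLUWrapModel().train()
model.qconfig = torch.quantization.get_default_qat_qconfig("fbgemm")
model.prelu.qconfig = None    # keep PReLU in fp32 during convert
torch.quantization.prepare_qat(model, inplace=True)

out = model(torch.randn(1, 3, 16, 16))   # one QAT forward to populate observers
model.eval()
qmodel = torch.quantization.convert(model)
qout = qmodel(torch.randn(1, 3, 16, 16))
```

This converts without errors and the converted model produces output of the same shape as the QAT model.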