Error during Quantization Aware Training
My quantization code looks like this:
backend = "qnnpack"
model.qconfig = torch.quantization.get_default_qat_qconfig(backend)
quantized_model = torch.quantization.prepare_qat(model, inplace=False).to(device)
I am getting an error when converting the model:
model_qat = torch.quantization.convert(model, inplace=False)
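For reference, my understanding of the full eager-mode QAT flow is roughly the sketch below (the toy model, device handling, and training-loop placeholder are just illustrative stand-ins, not my actual code). Two things stand out when I compare it to my snippet above: convert is being called on the original model rather than on the prepared quantized_model, and convert is normally run on the CPU with the model in eval mode.

import torch
import torch.nn as nn

# Toy model purely for illustration; substitute the real model here.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# 1. Select the quantized backend and attach the default QAT qconfig.
backend = "qnnpack"
torch.backends.quantized.engine = backend
model.qconfig = torch.quantization.get_default_qat_qconfig(backend)

# 2. Insert fake-quant/observer modules; prepare_qat expects train mode.
model.train()
quantized_model = torch.quantization.prepare_qat(model, inplace=False).to(device)

# ... run the usual training loop on quantized_model here ...

# 3. Convert the *prepared* model, on CPU and in eval mode.
quantized_model = quantized_model.cpu().eval()
model_qat = torch.quantization.convert(quantized_model, inplace=False)

Is passing the unprepared model to convert (or converting while the model is still on the GPU in train mode) the likely cause of the error I'm seeing?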