Issue with Quantization

I have an input image of shape 32×1×32×128, where 32 is my batch size (index 0).
I want to quantize my model, but when I call my evaluate function it shows this error:

 File "/media/ai/ashish/OCR/Text_Recognition/modules/", line 158, in build_P_prime
    batch_size, 3, 2).float().to(device)), dim=1)  # batch_size x F+3 x 2
RuntimeError: Could not run 'aten::_cat' with arguments from the 'QuantizedCPUTensorId' backend. 'aten::_cat' is only available for these backends: [CUDATensorId, CPUTensorId, VariableTensorId].

I'm hitting the same issue. Have you solved it?

Did you use `FloatFunctional().cat()` to replace the call to `torch.cat`?
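For anyone landing here: the error means a plain `torch.cat` is being called on quantized tensors, which the eager-mode quantization backend doesn't support. The usual fix is to route the concatenation through `torch.nn.quantized.FloatFunctional`, which gets swapped for a quantized implementation during `convert()`. A minimal sketch (the module and tensor names are made up for illustration, not from the original repo):

```python
import torch
import torch.nn as nn

class ConcatBlock(nn.Module):
    """Hypothetical module showing the FloatFunctional pattern for cat."""

    def __init__(self):
        super().__init__()
        # FloatFunctional is a stateful wrapper: during quantization prepare/convert
        # it is replaced by QFunctional, whose .cat() handles quantized tensors.
        self.ff = nn.quantized.FloatFunctional()

    def forward(self, x, y):
        # Instead of: torch.cat((x, y), dim=1)
        return self.ff.cat([x, y], dim=1)

block = ConcatBlock()
out = block(torch.randn(2, 3), torch.randn(2, 4))
print(out.shape)  # concatenated along dim=1
```

Note that each distinct `cat` call site needs its own `FloatFunctional` instance, since the observer attached to it collects per-site activation statistics.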