After static quantization, I receive the following error:

RuntimeError: No function is registered for schema aten::thnn_conv2d_forward(Tensor self, Tensor weight, int[2] kernel_size, Tensor? bias, int[2] stride, int[2] padding) -> (Tensor output, Tensor finput, Tensor fgrad_input) on tensor type QuantizedCPUTensorId; available functions are CPUTensorId, VariableTensorId

The error is raised by traced_script_module = torch.jit.trace(quantized, q_example, check_trace=False)

This says conv2d_forward received a quantized tensor, which means some float Conv was not swapped for a quantized Conv. Can you print the quantized model and see where this happens?
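One way to do that is to print the converted model, or scan named_modules for any plain nn.Conv* layers that convert() left in place. The helper below is a sketch (find_unswapped_convs is not a PyTorch API), demonstrated here on a small float model rather than the poster's network:

```python
import torch.nn as nn

# Hypothetical helper: report float Conv layers that quantization did not
# swap out. After convert(), swapped layers become nn.quantized.Conv*,
# so any remaining plain nn.Conv* is a likely culprit for the error.
def find_unswapped_convs(model):
    return [
        (name, type(m).__name__)
        for name, m in model.named_modules()
        if type(m) in (nn.Conv1d, nn.Conv2d, nn.Conv3d)
    ]

# Demo on a plain float model: both convs are reported by name.
net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3))
print(find_unswapped_convs(net))  # [('0', 'Conv2d'), ('2', 'Conv2d')]
```

Running the same check on the converted model should return an empty list; any entry it does return is a layer the swap missed.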

I am not sure whether this is the same problem, but in my case I got this error when I forgot to call "model.eval()". After adding "model.eval()", it seems fine.
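For context, a minimal eager-mode static quantization flow looks like the sketch below; the key point is calling model.eval() before prepare()/convert(). The Tiny model and shapes are placeholders, not the poster's network:

```python
import torch
import torch.nn as nn

# Placeholder model with the QuantStub/DeQuantStub boundaries that
# eager-mode static quantization requires.
class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.conv(self.quant(x))))

model = Tiny()
model.eval()  # must be in eval mode before static quantization
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)
model(torch.randn(1, 3, 8, 8))  # calibration pass with representative data
torch.quantization.convert(model, inplace=True)
output = model(torch.randn(1, 3, 8, 8))
print(output.shape)
```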

Then comes the error:

RuntimeError: No function is registered for schema aten::thnn_conv3d_forward(Tensor self, Tensor weight, int[3] kernel_size, Tensor? bias, int[3] stride, int[3] padding) -> (Tensor output, Tensor finput, Tensor fgrad_input) on tensor type CUDATensorId; available functions are CPUTensorId, VariableTensorId

if I load the model on the CPU:

model = ResNet3D()
model.apply(weights_init)
output = model(input)
print(input.shape)
print(output.shape)
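Since the error names CUDATensorId, the model or input likely still lives on the GPU, and quantized kernels only exist for the CPU backend. A sketch of forcing everything onto the CPU, using a stand-in Conv3d model because ResNet3D and weights_init are the poster's own code:

```python
import torch
import torch.nn as nn

# Stand-in for the poster's ResNet3D; the point is only the device handling.
model = nn.Sequential(nn.Conv3d(1, 4, 3), nn.ReLU())
model.to("cpu")   # move all parameters to CPU before running
model.eval()

input = torch.randn(1, 1, 8, 8, 8)  # tensors are created on CPU by default
output = model(input)
print(input.shape)   # torch.Size([1, 1, 8, 8, 8])
print(output.shape)  # torch.Size([1, 4, 6, 6, 6])
```

When loading a checkpoint saved from a GPU run, torch.load(path, map_location="cpu") keeps the weights on the CPU as well.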