Input type disagrees with the model type

I quantized both my model and my input to int8, but I get an error saying the input type must be float.

Why does the model still require the input type to be float?
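
For context, a minimal sketch of the kind of setup that can produce this mismatch, assuming the standard TensorFlow Lite post-training integer quantization path (the saved-model path, input shape, and calibration data are placeholders). Note that with `Optimize.DEFAULT` and a representative dataset alone, the converted model keeps a float32 input tensor unless `inference_input_type` is also set:

```python
import numpy as np
import tensorflow as tf

# Post-training integer quantization (path and shapes are placeholders).
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # Calibration samples in the same float range the model was trained on.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter.representative_dataset = representative_dataset

# Without these two lines, the converted model keeps float32 input/output
# tensors even though its weights and activations are int8 inside:
# converter.inference_input_type = tf.int8
# converter.inference_output_type = tf.int8
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
print(input_details["dtype"])  # float32 unless inference_input_type was set

# Feeding int8 data to a float32 input tensor raises the type mismatch.
sample = np.zeros(input_details["shape"], dtype=np.int8)
interpreter.set_tensor(input_details["index"], sample)  # ValueError here
```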

Could you provide more context, ideally a reproducible example?

Hi, I found the error: after removing the “normalize” step, it worked.
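
For later readers, a sketch of the shape of that fix, assuming the model was converted with an int8 input tensor (the file name is a placeholder). A separate float “normalize” step leaves the data as float32, which an int8 input rejects; instead, the input can be mapped through the quantization parameters the converter stored on the tensor itself:

```python
import numpy as np
import tensorflow as tf

# Hypothetical file name for the fully int8-quantized model.
interpreter = tf.lite.Interpreter(model_path="model_int8.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]

# Float data in the same range the representative dataset used.
float_input = np.random.rand(*input_details["shape"]).astype(np.float32)

# Instead of a separate float "normalize" step, quantize with the input
# tensor's own parameters: q = real / scale + zero_point.
scale, zero_point = input_details["quantization"]
int8_input = np.round(float_input / scale + zero_point).astype(np.int8)

interpreter.set_tensor(input_details["index"], int8_input)
interpreter.invoke()
```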