Input image with int?

Thanks for the advice; it certainly worked once I commented out both “set_input_quantized” and “set_output_quantized”.

But what I actually want to test is whether inference gets faster with “int input data + int8 model”. A test with “float input data + int8 model”, which is the usual setup, is not what I want to measure…

To test the former, I enabled “set_input_quantized” in the options. So is what I need to do the conversion of the input data from float32 to torch.quint8?
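In case it helps clarify what I mean: something like the following, where the input tensor is quantized to quint8 before being fed to the model. The scale and zero_point here are placeholder values; in practice they would have to match the quantization parameters the model expects for its input.

```python
import torch

# Placeholder quantization parameters -- in practice, take these from
# the model's input quantization spec (e.g. the observer's recorded
# scale/zero_point), not arbitrary values like below.
scale, zero_point = 0.05, 128

x_fp32 = torch.randn(1, 3, 224, 224)  # float32 input data

# Convert the float32 tensor to a quantized quint8 tensor
x_quint8 = torch.quantize_per_tensor(
    x_fp32, scale=scale, zero_point=zero_point, dtype=torch.quint8
)
print(x_quint8.dtype)  # torch.quint8
```

Is this the kind of conversion that is expected on the input side?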