How to make the output of an int8 quantized model match the original float32 model?

The int8 model outputs only integers, but the original model outputs floats. How do I convert the int8 outputs back into floats that are comparable to the original model's output?
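
As I understand it, the integer outputs map back to float via `float = scale * (int8 - zero_point)`. Here is a minimal sketch of what I'm trying, assuming a TensorFlow Lite int8 model (the model path and dummy input are placeholders, since my actual setup may differ):

```python
import numpy as np
import tensorflow as tf

# Placeholder path to a fully int8-quantized TFLite model.
interpreter = tf.lite.Interpreter(model_path="model_int8.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Dummy input with the dtype/shape the quantized model expects.
dummy = np.zeros(input_details['shape'], dtype=input_details['dtype'])
interpreter.set_tensor(input_details['index'], dummy)
interpreter.invoke()

raw = interpreter.get_tensor(output_details['index'])        # raw int8 output
scale, zero_point = output_details['quantization']            # per-tensor quantization params
dequantized = scale * (raw.astype(np.float32) - zero_point)   # back to float32
print(dequantized)
```

Is this dequantization step the right way to get outputs that match the float32 model, or is there something else I'm missing?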