Does PyTorch support int16 quantization?

According to the documentation, PyTorch supports int8 quantization. Does PyTorch currently support int16 quantization as well?

We currently do not support int16 quantization. There is support for fp16 dynamic quantization.
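As a minimal sketch of the fp16 path mentioned above, dynamic quantization can be applied with `torch.quantization.quantize_dynamic` by passing `dtype=torch.float16`; the toy model here is purely illustrative:

```python
import torch
import torch.nn as nn

# Hypothetical toy model, just to demonstrate the API
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))
model.eval()

# Dynamic quantization with dtype=torch.float16 casts the weights of the
# listed module types (here nn.Linear) to fp16; activations stay in fp32.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.float16
)

x = torch.randn(2, 16)
out = quantized(x)
print(out.shape)
```

Note that the set of module types (`{nn.Linear}`) restricts quantization to those layers; other layers are left untouched.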
