Does pytorch support training with low-precision INT8?

I am trying to experiment with low-precision (INT8) training. Is there support for this in PyTorch? Also, what quantization methods are supported?


Training with INT8: no chance, I'd guess, due to numerical stability limitations, but INT8 inference is very interesting.
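For inference, one method PyTorch does support is dynamic quantization via `torch.quantization.quantize_dynamic`, which stores the weights of selected layers as INT8 and quantizes activations on the fly. A minimal sketch (the layer sizes here are arbitrary, just for illustration):

```python
import torch
import torch.nn as nn

# A small FP32 model to quantize.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

# Convert Linear layers to dynamically quantized INT8 versions.
# This is inference-only: weights are stored as qint8, activations
# are quantized/dequantized on the fly during the forward pass.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 64)
out = qmodel(x)
print(out.shape)  # torch.Size([1, 10])
```

PyTorch also offers static post-training quantization and quantization-aware training (QAT) for inference deployment, but dynamic quantization is the smallest change to try first.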

However, when I use INT8 for computation, it isn't any faster; it's actually slower. I wonder why.