I attended a talk by Facebook at NeurIPS where they mentioned that they support int64 rather than int8. (The goal of supporting int64 was to enable cryptoTensors, but you can still use it for regular tensors or models.)
The speaker also mentioned that the PyTorch team developed the FBGEMM (Facebook GEneral Matrix Multiplication) library. I recall him saying it supports integer convolution, but I'm not sure whether it's integrated into the PyTorch framework: