PyTorch mixed-precision integer ops?

Does PyTorch provide mixed-precision integer operations? For example, if I have two int8 tensors, can I take their dot product into an int32 without overflowing? Can I do matrix multiplication into int32, where the partial products are kept at sufficient precision to avoid overflow?

Or would I have to write these kernels from scratch at the C++ level?
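(One workaround, short of custom kernels, is to widen the operands to int32 up front so that the partial products and the accumulation are all done at full precision. This is not a fused int8-in/int32-out kernel, just a sketch of the idea using ordinary tensor ops:)

```python
import torch

# Two int8 tensors whose dot product overflows int8 (max 127).
a = torch.tensor([100, 100, 100], dtype=torch.int8)
b = torch.tensor([100, 100, 100], dtype=torch.int8)

# Widen to int32 first so every partial product and the accumulation
# happen at full precision.
dot = (a.to(torch.int32) * b.to(torch.int32)).sum(dtype=torch.int32)
print(dot.item())  # 30000; an int8 accumulator would have wrapped around

# The same idea for a matrix product: broadcast, multiply elementwise,
# then reduce over the shared dimension, keeping everything in int32.
A = torch.full((2, 3), 100, dtype=torch.int8)
B = torch.full((3, 2), 100, dtype=torch.int8)
C = (A.to(torch.int32).unsqueeze(2) * B.to(torch.int32).unsqueeze(0)).sum(dim=1, dtype=torch.int32)
print(C)  # every entry is 30000
```

This pays for the widened intermediates in memory and bandwidth, so it demonstrates correctness rather than the performance a dedicated int8 GEMM kernel would give.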


Quantized operations will be part of the 1.2 release. They are aimed to land in PyTorch master in May.

You can read the proposal here:

There are a few people working on it, and you can follow some of the PRs. For example, jerryzh168 is one developer who is filing and merging PRs to help enable this.
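(For reference, the quantized tensor API that eventually shipped in PyTorch stores an int8 representation alongside a scale and zero point, so integer kernels can accumulate into int32 internally. A minimal sketch of that API:)

```python
import torch

x = torch.tensor([0.5, -1.0, 2.0])

# Quantize to int8 with an affine mapping: q = round(x / scale) + zero_point.
q = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.qint8)

print(q.int_repr())   # the underlying int8 values: [5, -10, 20]
print(q.dequantize()) # back to float, approximately [0.5, -1.0, 2.0]
```

Quantized operators (e.g. in `torch.nn.quantized`) consume and produce these tensors, keeping the intermediate accumulations at higher precision internally.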

Thanks for the update. Do you know if RNN cells are well-supported?