RuntimeError: expected scalar type Float but found Int? Why does torch.matmul support only floating-point dtypes?

Hi,

I was wondering why torch.matmul only supports floating-point dtypes in PyTorch. I would really appreciate it if someone could describe the best workaround without losing performance.

Thanks

This is a known upstream issue: Add support for integer matrix multiplication (particularly for dtype = torch.int8) · Issue #29961 · pytorch/pytorch (github.com)

As for a workaround: if you know the dynamic range of your integers, you can do exact integer accumulation in float16 for values from -2**11 to 2**11, and in float32 for values from -2**24 to 2**24 (the largest ranges of consecutive integers those formats can represent exactly).
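Here is a minimal sketch of that workaround; the helper name int_matmul_via_float is just for illustration. It is exact as long as every product and partial sum along the reduction dimension stays within the ±2**24 range of float32:

```python
import torch

def int_matmul_via_float(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper: integer matmul via float32 accumulation.
    # Exact only while all intermediate values (products and partial
    # sums) stay within +/- 2**24, the range in which float32 can
    # represent every integer exactly.
    assert not a.is_floating_point() and not b.is_floating_point()
    out = torch.matmul(a.to(torch.float32), b.to(torch.float32))
    return out.to(torch.int64)

a = torch.randint(-8, 8, (4, 5), dtype=torch.int32)
b = torch.randint(-8, 8, (5, 3), dtype=torch.int32)
print(int_matmul_via_float(a, b))
```

If your values fit the much tighter ±2**11 budget, you can cast to float16 instead, which is typically faster on GPU.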
