Does PyTorch support int8 F.conv2d?

The following:

import torch.nn.functional as F
import torch

# int8 (Char) weights and inputs on the GPU
filters = torch.randint(0, 2, (8, 4, 3, 3)).type(torch.CharTensor).cuda()
inputs  = torch.randint(0, 2, (1, 4, 5, 5)).type(torch.CharTensor).cuda()

r = F.conv2d(inputs, filters, padding=1)
print(r)

Gives:
RuntimeError: getCudnnDataType() not supported for Char

I get the same error with int32.

In this issue they say PyTorch doesn't yet support int8 convolutions: https://github.com/pytorch/pytorch/issues/26274
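
Until that lands, one workaround (just a sketch of mine, not from the issue) is to run the convolution in float32 and cast the result back. With small integer values like the 0/1 data above, the float32 result is exact, so the round trip is lossless:

import torch
import torch.nn.functional as F

filters = torch.randint(0, 2, (8, 4, 3, 3), dtype=torch.int8).cuda()
inputs  = torch.randint(0, 2, (1, 4, 5, 5), dtype=torch.int8).cuda()

# cuDNN supports float32, so convolve in float and cast the result back to int8
r = F.conv2d(inputs.float(), filters.float(), padding=1)
r = r.round().to(torch.int8)  # exact here: the accumulated sums stay well within int8 range
print(r)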

As far as I know, if you need int8 for scoring (inference) you can use TensorRT; it will "optimally" convert your fp32 model to an int8 one.
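
A rough sketch of that path, assuming you export the model to ONNX first (the model, input shape, and file name below are placeholders, and trtexec is TensorRT's bundled CLI):

import torch
import torchvision

# Export an fp32 PyTorch model to ONNX (placeholder model and input size)
model = torchvision.models.resnet18(pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model.onnx", opset_version=11)

# TensorRT can then build an INT8 engine from the ONNX file, e.g. with the bundled CLI:
#   trtexec --onnx=model.onnx --int8
# (good INT8 accuracy additionally needs a calibration dataset / calibrator)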

I attended a talk by Facebook at NeurIPS, and they mentioned that they support int64 rather than int8. (The goal of supporting int64 was to enable cryptoTensors, but you can still use it for regular tensors or models.)

The speaker also mentioned that the PyTorch team developed the FBGEMM (Facebook GEneral Matrix Multiplication) library. I recall him saying it supports integer convolution, but I'm not sure whether it's integrated into the PyTorch framework.
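
As far as I can tell it is: FBGEMM ships as PyTorch's default quantized backend on x86 CPUs, so int8 convolutions are already reachable there through the eager-mode quantization API. A minimal sketch (the tiny model and the calibration input below are just placeholders):

import torch
import torch.nn as nn

torch.backends.quantized.engine = "fbgemm"  # FBGEMM is the x86 quantized backend

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()      # quantizes fp32 input after convert()
        self.conv = nn.Conv2d(4, 8, 3, padding=1)
        self.dequant = torch.quantization.DeQuantStub()  # dequantizes the output back to fp32

    def forward(self, x):
        return self.dequant(self.conv(self.quant(x)))

m = TinyNet().eval()
m.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(m, inplace=True)
m(torch.randn(1, 4, 5, 5))                    # calibration pass to record activation ranges
torch.quantization.convert(m, inplace=True)   # the conv now runs with int8 weights via FBGEMM
print(m(torch.randn(1, 4, 5, 5)))

Note this runs on CPU; the CUDA path in the original question is still the part that isn't supported.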