RuntimeError: _thnn_conv2d_forward is not implemented for type torch.CharTensor

import torch
import torch.nn.functional as F

kernel2 = 2 * torch.ones((3, 3), dtype=torch.int8)
kernel2 = kernel2[None, None, ...]  # shape (1, 1, 3, 3)

X = torch.zeros((1, 3, 3), dtype=torch.int8)
X[:, 1, 1] = 1
X = X[:, None, :, :]  # shape (1, 1, 3, 3)

F.conv2d(X, kernel2, padding=1)

RuntimeError: _thnn_conv2d_forward is not implemented for type torch.CharTensor

I am guessing conv2d can’t handle dtype=torch.int8. Does anyone know a workaround, and why isn’t this implemented?

PyTorch’s convolution layers expect floating-point input (torch.FloatTensor, i.e. torch.float32). So instead of dtype=torch.int8, use torch.float32.
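For example, a minimal version of that workaround applied to the repro above: cast to float for the conv, then (optionally) cast back to int8 if the bounded-integer property matters downstream. The shapes and values here mirror the question.

```python
import torch
import torch.nn.functional as F

# Same setup as in the question, in (N, C, H, W) layout.
kernel2 = 2 * torch.ones((1, 1, 3, 3), dtype=torch.int8)
X = torch.zeros((1, 1, 3, 3), dtype=torch.int8)
X[:, :, 1, 1] = 1

# Cast to float32 for the conv, then back to int8.
out = F.conv2d(X.float(), kernel2.float(), padding=1)
out_int8 = out.to(torch.int8)
```

Since X has a single 1 at the center and the 3×3 kernel is all 2s, every output position under padding=1 covers the center, so the result is all 2s.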

Yes, I am aware I can always do it in float32, but if I know the values are integers and bounded, that may be a waste of memory.

In my experience the fallback THNN convolution is much less memory-efficient than the specialized implementations (NNPACK/MKL-DNN/cuDNN etc.), so you’re probably wasting memory if you don’t use them.

That said, you could try unfold and fold with a matrix multiplication. This is roughly what THNN does internally (at least in some cases).

Best regards