RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same

I use torch.cuda.amp.autocast around the model forward pass, but I get the error shown in the title.
The code is:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SuperConv2d(nn.Conv2d):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1,
                 padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros'):
        super(SuperConv2d, self).__init__(in_channels, out_channels, kernel_size,
                                          stride, padding, dilation, groups, bias, padding_mode)

    def forward(self, x, config):
        in_nc = x.size(1)
        out_nc = config['channel']
        # slice the full weight down to the requested sub-network shape
        weight = self.weight[:out_nc, :in_nc]  # [oc, ic, H, W]
        if self.bias is not None:
            bias = self.bias[:out_nc]
        else:
            bias = None
        return F.conv2d(x, weight, bias, self.stride, self.padding, self.dilation, self.groups)

What can I change to make this module work with amp?
Does F.conv2d support amp?

My environment is:

gpu: rtx2080ti
torch: py3.7_cuda10.1.243_cudnn7.6.3_0
cuda: 10.1

The code seems to work for me using your custom module:

conv = SuperConv2d(10, 10, 3, 1, 1).cuda()
x = torch.randn(10, 10, 24, 24).cuda()  # float32 input

config = {'channel': 2}

with torch.cuda.amp.autocast():
    out = conv(x, config)

print(out.dtype)
> torch.float16
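
For what it's worth, F.conv2d itself is on autocast's float16 cast list (same as nn.Conv2d), so even a direct functional call with a float32 weight gets cast automatically. A minimal sketch, where the weight tensor w is made up just for illustration:

w = torch.randn(2, 10, 3, 3, device='cuda')  # plain float32 tensor, not a Parameter
with torch.cuda.amp.autocast():
    out = F.conv2d(x, w, padding=1)  # autocast casts both x and w to float16
print(out.dtype)  # should print torch.float16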

Could you check what might be different in your code compared to my snippet?

Maybe the difference is that my input is float16, while it is float32 in your case.
I use this module inside a network, so for me the input of this module is float16.

That doesn’t seem to be the issue, as the input is now also float16:

plain_conv = nn.Conv2d(10, 10, 3, 1, 1).cuda()
conv = SuperConv2d(10, 10, 3, 1, 1).cuda()
x = torch.randn(10, 10, 24, 24).cuda()

config = {'channel': 2}

with torch.cuda.amp.autocast():
    out = plain_conv(x)
    print(out.dtype)  # float16: plain_conv's output is cast by autocast
    out = conv(out, config)  # SuperConv2d now receives a float16 input

print(out.dtype)
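
Both prints should show torch.float16 again: plain_conv's output is float16 under autocast, and F.conv2d casts the float32 weight slice to match. If you still hit the error in your full network, one possibility is that the module is called outside the autocast region (e.g. with autocast disabled in that code path) while its input is already float16; plain F.conv2d would then raise exactly this dtype mismatch. As a defensive sketch, purely an assumption on my side and not a confirmed fix, you could cast the sliced parameters to the input dtype:

class SuperConv2d(nn.Conv2d):
    def forward(self, x, config):
        out_nc = config['channel']
        # hypothetical workaround: match parameter dtype to the input dtype,
        # so the call also works outside an autocast region
        weight = self.weight[:out_nc, :x.size(1)].to(x.dtype)
        bias = self.bias[:out_nc].to(x.dtype) if self.bias is not None else None
        return F.conv2d(x, weight, bias, self.stride, self.padding,
                        self.dilation, self.groups)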