Error related to the kernel size and padding for a conv2D

I have a tensor of size (batch_size=16, channels=192, H=7, W=7). I want to reduce the number of channels to 160 while keeping the same height and width. The my class below is the main class I use to run the experiment, and x (the tensor mentioned above) is its input:

import torch.nn as nn

class MyConv2d(nn.Module):
    def __init__(self, in_planes, out_planes, kernel_size, stride, padding=0):
        super(MyConv2d, self).__init__()
        self.conv = nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size, stride=stride, padding=padding, bias=False)
        self.bn = nn.BatchNorm2d(out_planes, eps=1e-3, momentum=0.001, affine=True)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.conv(x)    # The error happens here
        x = self.bn(x)
        x = self.relu(x)
        return x

class my(nn.Module):
    def __init__(self, in_channel=192, out_channel=160, out_sigmoid=False):
        super(my, self).__init__()

        self.out_sigmoid = out_sigmoid

        self.deconv = self._make_deconv(in_channel, out_channel)

    def _make_deconv(self, in_channel, out_channel, kernel_size=3, stride=1, padding=1):
        layers = []
        layers.append(MyConv2d(in_channel, out_channel, kernel_size=kernel_size, stride=stride, padding=padding))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.deconv(x)
        return x

Unfortunately, I am facing the following error:

How can I fix the problem?

The error message doesn't point to the kernel size or padding, but to a mismatch in the number of input channels: your input has 192 channels while the Conv2d expects 768. Construct the layer with in_planes=192 (i.e. pass in_channel=192 to my) and it should work.
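For example, here is a minimal sketch, assuming the MyConv2d and my classes from the question are defined and using their default arguments (in_channel=192, out_channel=160):

import torch

# Input from the question: (batch_size=16, channels=192, H=7, W=7)
x = torch.randn(16, 192, 7, 7)

# in_channel must match the 192 channels of x; kernel_size=3 with stride=1
# and padding=1 leaves H and W unchanged, so only the channel count changes.
model = my(in_channel=192, out_channel=160)

out = model(x)
print(out.shape)  # torch.Size([16, 160, 7, 7])

More generally, with stride=1 the spatial size is preserved whenever padding = (kernel_size - 1) // 2 for an odd kernel size, which is exactly what kernel_size=3 with padding=1 gives you here.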