Dealing with 1-channel images

I have a tensor of shape [1291162, 1, 28, 28], where 1 is the channel dimension and the images are 28x28.
I created this simple custom model to debug the problem:

import torch
from torch import nn

def conv_layer(ni, nf, kernel_size=3, stride=1):
    return nn.Sequential(
            nn.Conv2d(ni, nf, kernel_size=kernel_size, bias=False,
                      stride=stride, padding=kernel_size // 2),
            nn.BatchNorm2d(nf, momentum=0.01),
            nn.LeakyReLU(negative_slope=0.1, inplace=True)
        )

class Model(nn.Module):
    def __init__(self, ni, num_classes):
        super(Model, self).__init__()
        self.conv1 = conv_layer(ni, ni // 2, kernel_size=1)
        self.conv2 = conv_layer(ni // 2, ni, kernel_size=3)
        self.classifier = nn.Linear(ni * 8 * 4, num_classes)

    def forward(self, x):
        x = self.conv2(self.conv1(x))
        x = x.view(x.size(0), -1)
        return self.classifier(x)

When I instantiate the model with Model(28, num_classes), so that 28 is taken as ni, it throws: RuntimeError: Given groups=1, weight of size [14, 28, 1, 1], expected input[1291162, 1, 28, 28] to have 28 channels, but got 1 channels instead. I thought PyTorch expects the channel in the second dimension, i.e. [batch_size, n_c, H, W], yet the model runs when I permute the axes. Also, is there any pre-trained model that works for 1-channel images?

ni sets the number of input channels of the first Conv2d. Your tensor has 1 channel, but by passing ni=28 you are telling the Conv2d to expect 28 input channels; the weight shape [14, 28, 1, 1] in the error message means 14 output channels, 28 input channels, and a 1x1 kernel. Permuting the axes only appears to work because it moves one of the 28-pixel spatial dimensions into the channel slot, so the convolution silently treats image rows as channels.

You should set ni to 1, since you want to feed tensors with 1 channel, and then make the number of output channels (nf) double instead of halve. Halving does not work here because integer division gives 1 // 2 = 0, i.e. a layer with zero channels.
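A minimal sketch of the corrected setup, assuming the layers keep your original kernel sizes and stride 1 (so the 28x28 spatial size is preserved by the "same" padding), and using a hypothetical num_classes=10:

```python
import torch
from torch import nn

def conv_layer(ni, nf, kernel_size=3, stride=1):
    return nn.Sequential(
        nn.Conv2d(ni, nf, kernel_size=kernel_size, bias=False,
                  stride=stride, padding=kernel_size // 2),
        nn.BatchNorm2d(nf, momentum=0.01),
        nn.LeakyReLU(negative_slope=0.1, inplace=True),
    )

class Model(nn.Module):
    def __init__(self, ni, num_classes):
        super().__init__()
        # Double the channel count at each layer instead of halving it,
        # since ni // 2 would be 0 when ni == 1.
        self.conv1 = conv_layer(ni, ni * 2, kernel_size=1)
        self.conv2 = conv_layer(ni * 2, ni * 4, kernel_size=3)
        # Stride 1 with "same" padding preserves the 28x28 spatial size,
        # so the flattened feature size is (ni * 4) * 28 * 28.
        self.classifier = nn.Linear(ni * 4 * 28 * 28, num_classes)

    def forward(self, x):
        x = self.conv2(self.conv1(x))
        x = x.view(x.size(0), -1)
        return self.classifier(x)

model = Model(ni=1, num_classes=10)     # ni=1 matches the 1-channel input
out = model(torch.randn(8, 1, 28, 28))  # small batch as a smoke test
print(out.shape)                        # torch.Size([8, 10])
```

The key change is that ni now describes the input data (1 channel) rather than the image height, and the Linear layer's input size is derived from the actual shape of the flattened feature map.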