RuntimeError: Given groups=1, weight[64, 3, 3, 3], so expected input[1, 500, 500, 3] to have 3 channels, but got 500 channels instead

Can someone please help me with this error? This is my CNN:

import torch.nn as nn

class DnCNN(nn.Module):
    def __init__(self, channels, num_of_layers=17):
        super(DnCNN, self).__init__()
        kernel_size = 3
        padding = 1
        features = 64
        layers = []
        # first layer: conv + ReLU
        layers.append(nn.Conv2d(in_channels=channels, out_channels=features, kernel_size=kernel_size, padding=padding, bias=False))
        layers.append(nn.ReLU(inplace=True))
        # middle layers: conv + batch norm + ReLU
        for _ in range(num_of_layers - 2):
            layers.append(nn.Conv2d(in_channels=features, out_channels=features, kernel_size=kernel_size, padding=padding, bias=False))
            layers.append(nn.BatchNorm2d(features))
            layers.append(nn.ReLU(inplace=True))
        # last layer: conv back to the input channel count
        layers.append(nn.Conv2d(in_channels=features, out_channels=channels, kernel_size=kernel_size, padding=padding, bias=False))
        self.dncnn = nn.Sequential(*layers)

    def forward(self, x):
        out = self.dncnn(x)
        return out

This is part of the training code:

# Build model
net = DnCNN(channels=3, num_of_layers=opt.num_of_layers)
net.apply(weights_init_kaiming)
# note: size_average is deprecated in newer PyTorch; nn.MSELoss(reduction='sum') is the equivalent
criterion = nn.MSELoss(size_average=False)

The training code runs normally and only throws an error at the end of the first epoch, saying:

RuntimeError: Given groups=1, weight[64, 3, 3, 3], so expected input[1, 500, 500, 3] to have 3 channels, but got 500 channels instead

As you can see, I don't have any number of channels set to 500, yet it still outputs this error.

Based on the error message it seems like you are passing an image tensor in channels-last ordering ([batch_size, h, w, c]), while PyTorch expects channels-first inputs as [batch_size, c, h, w].
Could you check if all your inputs have this shape?
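
You could verify it with a quick check like this (a minimal sketch using your DnCNN definition; the 500x500 size is just taken from the error message):

import torch

net = DnCNN(channels=3, num_of_layers=17)

x = torch.randn(1, 3, 500, 500)      # channels-first: what nn.Conv2d expects
print(net(x).shape)
> torch.Size([1, 3, 500, 500])

x_bad = torch.randn(1, 500, 500, 3)  # channels-last: reproduces your error
# net(x_bad)  # RuntimeError: ... expected input[1, 500, 500, 3] to have 3 channels ...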

Yes, that was the problem, and I fixed it by resizing the input. Thank you very much for your reply!

Good to hear it’s working.
However, I’m a bit concerned when you say “resizing the input”, as a view or reshape won’t yield the desired output.
To swap the axes, use permute, if you haven’t already done so.
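
For example (a minimal sketch, assuming your image is a [500, 500, 3] numpy array called img):

import torch

x = torch.from_numpy(img)  # [500, 500, 3]
x = x.permute(2, 0, 1)     # [3, 500, 500], axes swapped, pixel values preserved
x = x.unsqueeze(0)         # [1, 3, 500, 500], batch dimension added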

My input's shape is [500, 500, 3], and since, as you said, PyTorch expects [batch_size, c, h, w], I resized the image using np.resize(img, (3, 500, 500)), converted it into a tensor with torch.from_numpy(img), and then applied torch.unsqueeze(img, 0). Will this give the desired output, or is there a problem?

Yeah, that might be problematic.
Have a look at this small example:

import numpy as np
import matplotlib.pyplot as plt

# channels-last image with a white square in the center
x = np.zeros((500, 500, 3))
x[200:300, 200:300, :] = 1.
plt.imshow(x)

# np.resize just reads the values in flat memory order and refills the new shape
x_reshaped = np.resize(x, (3, 500, 500))
print(x_reshaped.shape)
> (3, 500, 500)
print(x_reshaped.sum(1).sum(1))
> [    0. 30000.     0.]

# transpose actually swaps the axes, keeping each pixel's channel values together
x_transposed = x.transpose(2, 0, 1)
plt.imshow(x_transposed[0])
print(x_transposed.sum(1).sum(1))
> [10000. 10000. 10000.]

As you can see, np.resize just reinterprets the flat memory and puts all the ones into the second channel, which is wrong.
Use transpose in NumPy or permute in PyTorch instead.
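
The same check in PyTorch, using permute (a quick sketch reusing the x from the example above):

import torch

x_t = torch.from_numpy(x)   # [500, 500, 3]
x_t = x_t.permute(2, 0, 1)  # [3, 500, 500]
print(x_t.sum(dim=(1, 2)))
> tensor([10000., 10000., 10000.], dtype=torch.float64)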

Okay, thank you very much for your help!