RuntimeError: Given groups=1, weight of size 16 16 3 3, expected input[16, 64, 222, 222] to have 16 channels, but got 64 channels instead

I am trying to run the following program for an image classification problem in PyTorch. I am new to PyTorch and cannot figure out what is wrong with the code. I tried reshaping the images, but that did not help. I am running this code with CUDA. I have around 750 classes with 10 to 20 images per class. My dataset is a benchmark dataset, and every image has a size of 60×160.

I am getting this error and don't know where to make changes: Given groups=1, weight of size 16 16 3 3, expected input[16, 64, 222, 222] to have 16 channels, but got 64 channels instead.

The input you give it has size [16, 64, 222, 222], where by convention the dimensions are batch, channels, height, width.
As you can see, your input has 64 channels, but the convolution you pass it to was created to expect 16 input channels. Hence the error.

You need to make sure that the `in_channels` of the convolution matches the number of channels in the tensor you feed it.
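A minimal sketch of the mismatch and the fix (the tensor shape is taken from your error message; the layer names here are illustrative, not from your code). The error typically means an earlier layer in your network outputs 64 channels while the next `nn.Conv2d` was constructed with `in_channels=16`:

```python
import torch
import torch.nn as nn

# Shape from the error message: batch=16, channels=64, height=222, width=222
x = torch.randn(16, 64, 222, 222)

# This conv expects 16 input channels, so a 64-channel tensor
# raises the same RuntimeError you are seeing:
bad_conv = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=3)
try:
    bad_conv(x)
except RuntimeError as e:
    print(e)

# Fix: set in_channels to match the 64 channels actually produced
# by the previous layer.
good_conv = nn.Conv2d(in_channels=64, out_channels=16, kernel_size=3)
out = good_conv(x)
print(out.shape)  # (16, 16, 220, 220): a 3x3 kernel with no padding shrinks H and W by 2
```

In a `nn.Sequential` or custom `forward`, check each layer pair: the `out_channels` of one convolution must equal the `in_channels` of the next.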