Input Single Image to Pytorch CNN Model

I have implemented a model for the CIFAR dataset, which has 32x32 images. Testing this model on a batch gives correct predictions.

But when I pass a single image, it raises the following error:
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 512, 1, 1])


This error is raised by batchnorm layers if the batch statistics cannot be calculated from a single input value.
If you are testing the model, you should call model.eval(), which will put all modules in evaluation mode, such that e.g. the batchnorm layers will apply their running estimates instead of calculating the stats from the current input batch.
Thus you could use the model even for single input images.
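
For example, a minimal sketch (the model below is a hypothetical stand-in for a CIFAR CNN, not your actual architecture):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained CIFAR CNN; the BatchNorm2d layer after the
# pooling is what raises the error in training mode with a batch of one sample.
model = nn.Sequential(
    nn.Conv2d(3, 512, kernel_size=3, stride=2, padding=1),
    nn.AdaptiveAvgPool2d(1),        # activation shape becomes [N, 512, 1, 1]
    nn.BatchNorm2d(512),
    nn.Flatten(),
    nn.Linear(512, 10),
)

image = torch.randn(3, 32, 32)      # one CIFAR-sized image, no batch dimension yet
batch = image.unsqueeze(0)          # shape [1, 3, 32, 32]

model.eval()                        # batchnorm now uses its running estimates
with torch.no_grad():
    output = model(batch)           # works for a single sample
print(output.shape)                 # torch.Size([1, 10])
```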

Thanks, I solved the previous error with your help. However, I am now facing a new error: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Your output seems to have a single dimension, which is wrong.
Did you delete the batch dimension?
If not, could you print(output.shape) in your code before trying to compute the predictions?
For a multi-class classification, the output should have the shape [batch_size, nb_classes].
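
For example, with a correctly shaped output (a hypothetical random tensor standing in for your model output):

```python
import torch

# Hypothetical stand-in for the model output of a single CIFAR-10 image.
output = torch.randn(1, 10)          # [batch_size, nb_classes] = [1, 10]
print(output.shape)                  # torch.Size([1, 10])

preds = torch.argmax(output, dim=1)  # one class index per sample; needs dim 1 to exist
print(preds)                         # e.g. tensor([7])
```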

Check your code for any squeeze() calls, which might have removed the batch dimension of 1.
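
A short sketch of how a stray squeeze() produces exactly that error, using a hypothetical output tensor:

```python
import torch

output = torch.randn(1, 10)          # correct shape: [batch_size, nb_classes]
flat = output.squeeze()              # torch.Size([10]) -- batch dim of 1 removed
# flat.max(dim=1) would raise:
# IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

restored = flat.unsqueeze(0)         # torch.Size([1, 10]) -- batch dim restored
print(restored.max(dim=1)[1])        # computing the prediction works again
```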