How to predict own picture with a trained model?

I want to use my own trained model to classify a new picture, but I run into the following error. I can't tell what is wrong, please help me.

Error message:

File "/opt/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 301, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 7, 7], but got input of size [1, 224, 224] instead

My code is like this:

Import needed packages

import torch
import torch.nn as nn
import torchvision.models as models
import numpy as np
from torch.autograd import Variable
from torchvision import datasets, transforms
import torch.nn.functional as F
import torchvision.utils as vutils
from io import open
import os
from PIL import Image

Load Data

data_transforms = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])

model_ft = torch.load("model.pkl")

imagepath = './20170522/images1.1(074047-074310)/CPI__20170522_074049_909/001.png'
image = Image.open(imagepath)
imgblob = data_transforms(image)
imgblob.unsqueeze_(dim=0)
imgblob = Variable(imgblob)
torch.no_grad()
predict = F.softmax(model_ft(imgblob))
print(predict)

It looks like the channel dimension is missing in your input.
Could it be you’re loading a grayscale image?
If so, try to load your image as an RGB image using:

image = Image.open(imagepath).convert('RGB')
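
For reference, this is roughly how the full prediction snippet could look with that fix applied. It's a sketch based on the code in your post, and it assumes model.pkl stores the whole model object (so torch.load returns an nn.Module) trained on 3-channel 224x224 input; it also uses torch.no_grad() as a context manager, since calling it on its own line has no effect, and passes an explicit dim to softmax:

import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms

data_transforms = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])

model_ft = torch.load("model.pkl")  # assumes the whole model was saved, not just a state_dict
model_ft.eval()  # switch dropout/batchnorm layers to inference mode

imagepath = './20170522/images1.1(074047-074310)/CPI__20170522_074049_909/001.png'
image = Image.open(imagepath).convert('RGB')  # force 3 channels even if the file is grayscale
imgblob = data_transforms(image).unsqueeze(0)  # add the batch dimension -> [1, 3, 224, 224]

with torch.no_grad():  # disable gradient tracking during inference
    predict = F.softmax(model_ft(imgblob), dim=1)
print(predict)

In recent PyTorch versions the Variable wrapper is unnecessary; plain tensors work directly with the model.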

Your suggestion is very useful, thank you very much!