RuntimeError: Expected 4-dimensional input for 4-dimensional weight 64 3 7 7, but got 3-dimensional input of size [3, 224, 224] instead

import torch
from torch.autograd import Variable
import torch.nn.functional as F
from torchvision import transforms
from PIL import Image

data_transforms = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])


My code:

image = 'dudth.jpg'
image = Image.open(image).convert('RGB')
image = data_transforms(image)
image.unsqueeze(dim=0)
imgblob = Variable(image)
torch.no_grad()
predict = F.softmax(model_ft(imgblob))
print(predict)

Change image.unsqueeze(dim=0) to image = image.unsqueeze(dim=0).
unsqueeze is not an in-place operation, so calling it without assigning the result back discards the new tensor. Your input therefore stays 3-dimensional ([3, 224, 224]) instead of gaining the batch dimension ([1, 3, 224, 224]) that the convolutional layer expects, which is exactly what the error message reports.
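A small self-contained sketch of the difference (the tensor shape matches your error message; the model and image file are omitted):

```python
import torch

# unsqueeze returns a NEW tensor; it does not modify its input
x = torch.zeros(3, 224, 224)
y = x.unsqueeze(dim=0)
print(x.shape)  # torch.Size([3, 224, 224])    -- x is unchanged
print(y.shape)  # torch.Size([1, 3, 224, 224]) -- batch dimension added

# the in-place variant has a trailing underscore and modifies x itself
x.unsqueeze_(dim=0)
print(x.shape)  # torch.Size([1, 3, 224, 224])
```

Incidentally, a bare torch.no_grad() on its own line also has no effect; use it as a context manager around the forward pass, i.e. with torch.no_grad(): predict = F.softmax(model_ft(imgblob), dim=1).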
