Model.forward(img) throws a size mismatch error


I am using the pretrained vgg16 model. I have replaced the classifier with the following:

```python
from collections import OrderedDict
import torch.nn as nn

classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(input_size, hidden_sizes[0])),
    ('relu_fc1', nn.ReLU()),
    ('fc2', nn.Linear(hidden_sizes[0], hidden_sizes[1])),
    ('relu_fc2', nn.ReLU()),
    ('output', nn.Linear(hidden_sizes[1], output_size)),
    ('LogSoftmax', nn.LogSoftmax(dim=1))
]))
model.classifier = classifier
```

Now when I test the image with

```python
output = model.forward(toImg)
```

I get the following error:

```
/opt/conda/lib/python3.6/site-packages/torch/nn/ in linear(input, weight, bias)
    990     if input.dim() == 2 and bias is not None:
    991         # fused op is marginally faster
--> 992         return torch.addmm(bias, input, weight.t())
    994     output = input.matmul(weight.t())

RuntimeError: size mismatch, m1: [1 x 18432], m2: [25088 x 4096] at /opt/conda/conda-bld/pytorch_1524584710464/work/aten/src/THC/generic/
```

Could you print the shape of toImg?
The standard input size for vgg16 is [batch_size, 3, 224, 224].

Also, you shouldn’t call model.forward directly, as this might yield strange behavior if you want to use hooks.
Just call the model directly: `output = model(toImg)`.


Yes, this resolved my error. Thank you, ptrblck!