Pretrained model always returns constant outputs

After training the deep model, I took it for inference. The trained model always returns the same result for different images.

Note that I have called the model.eval() method after loading the model and before feeding in the images. So what is the problem with my code?

from pprint import pprint

import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as transforms
from PIL import Image
from torchvision import models


def infer(img_file):
    model = models.resnet18(pretrained=False)
    num_ftrs = model.fc.in_features
    model.fc = nn.Linear(num_ftrs, 998)

    model = model.float()
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

    model = nn.DataParallel(model)
    model.load_state_dict(torch.load('./model/ResNet18_Plant.pth'))

    model.eval()
    model.to(device)

    df = pd.read_csv('../label.csv')
    key_type = {}
    names = df['category_name'].tolist()
    labels = df['label'].tolist()
    for name, label in zip(names, labels):
        key_type[int(name.split('_')[-1])] = label

    img = Image.open(img_file)

    preprocess = transforms.Compose([
        transforms.Resize(227),
        transforms.RandomCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])

    img = preprocess(img)
    img.unsqueeze_(0)

    img = img.to(device)

    outputs = model(img)
    outputs = F.softmax(outputs, dim=1)

    # get TOP-K output labels and corresponding probabilities
    topK_prob, topK_label = torch.topk(outputs, 5)
    prob = topK_prob.to("cpu").detach().numpy().tolist()

    _, predicted = torch.max(outputs.data, 1)

    return {
        'status': 0,
        'message': 'success',
        'results': [
            {
                'name': key_type[int(topK_label[0][i].to("cpu"))],
                'prob': round(prob[0][i], 4)
            } for i in range(5)
        ]
    }


if __name__ == '__main__':
    pprint(infer('./test.jpg'))

Thanks for your review and answers.

Can you check whether the raw outputs (before the softmax) are also all the same, regardless of the image? Something like the sketch below should show it.
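For example, here is a minimal sketch. It builds and loads the model the same way your infer() function does and prints the raw logits for two different images; the image paths are placeholders, and I swapped RandomCrop for CenterCrop so the crop itself is deterministic:

import torch
import torch.nn as nn
import torchvision.transforms as transforms
from PIL import Image
from torchvision import models

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# build the model and load the checkpoint exactly as in infer()
model = models.resnet18(pretrained=False)
model.fc = nn.Linear(model.fc.in_features, 998)
model = nn.DataParallel(model)
model.load_state_dict(torch.load('./model/ResNet18_Plant.pth', map_location=device))
model.to(device)
model.eval()

# deterministic preprocessing so the comparison is not affected by random crops
preprocess = transforms.Compose([
    transforms.Resize(227),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

with torch.no_grad():
    for path in ['./img_a.jpg', './img_b.jpg']:  # two clearly different test images (placeholders)
        x = preprocess(Image.open(path).convert('RGB')).unsqueeze(0).to(device)
        logits = model(x)
        print(path, logits[0, :5])  # first few raw logits, before softmax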

Yes, they are all the same. It’s so strange.

Was the model working properly on your validation data during training?
If so, how did the training loss compare to the validation loss?

Well, the training loss and validation loss decreased normally, and the accuracy on the validation set is about 0.89 across 998 categories. Everything went well except for this weird inference code. :frowning_face:

I can't see anything obviously wrong in your code.
Could you use this inference code on your validation data and see how it performs? A rough sketch of what I mean follows.
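This assumes your validation images can be loaded with ImageFolder and that the folder class indices match your training labels; the paths, batch size, and dataset layout are placeholders, so adjust them to your actual setup:

import torch
import torch.nn as nn
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torchvision import datasets, models

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# same model construction and checkpoint loading as in infer()
model = models.resnet18(pretrained=False)
model.fc = nn.Linear(model.fc.in_features, 998)
model = nn.DataParallel(model)
model.load_state_dict(torch.load('./model/ResNet18_Plant.pth', map_location=device))
model.to(device)
model.eval()

# deterministic validation-time preprocessing
preprocess = transforms.Compose([
    transforms.Resize(227),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

val_set = datasets.ImageFolder('../val', transform=preprocess)  # hypothetical path
val_loader = DataLoader(val_set, batch_size=32, shuffle=False)

correct = total = 0
with torch.no_grad():
    for images, targets in val_loader:
        images, targets = images.to(device), targets.to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == targets).sum().item()
        total += targets.size(0)

print('validation accuracy with the inference pipeline: {:.4f}'.format(correct / total))

If the accuracy here is far below the ~0.89 you saw during training, the mismatch is somewhere in this inference path (model loading or preprocessing) rather than in the trained weights.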