Model keeps changing results for evaluation

I took a pretrained SqueezeNet model and fine-tuned it on my data. During training I get good training and validation accuracy.
At inference time, however, whether I run the model on one file at a time or through a DataLoader, the output changes completely between runs (I do call model.eval()). One run is very good, and then 4 or 5 runs give (almost) random results.
I do the following:

import torch
from torchvision import datasets, transforms

device = torch.device("cuda:0")

# Load the fine-tuned SqueezeNet and switch to evaluation mode
model = torch.load('./models/SN_all.pth')
model = model.to(device)
model.eval()

# input_size is the same value used during training (224 for SqueezeNet)
data_transforms = {
    'predict': transforms.Compose([
        transforms.RandomResizedCrop(input_size),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
}
dataset = {'predict': datasets.ImageFolder('./data_test/val/', data_transforms['predict'])}
dataloader = {'predict': torch.utils.data.DataLoader(dataset['predict'], batch_size=1, shuffle=False, num_workers=4)}

with torch.no_grad():
    for inputs, labels in dataloader['predict']:
        inputs = inputs.to(device)
        output = model(inputs)
        output = output.cpu()
        index = output.argmax(dim=1).item()

Do you see any issue in my code?
Thank you in advance.

Please check the results using just CenterCrop or Resize instead.

I suspect this is the effect of RandomResizedCrop(): the crop is random, so the results will be random as well.
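
For reference, a minimal sketch of a deterministic evaluation pipeline, assuming input_size is the value used during training (224 for SqueezeNet) and a Resize-then-CenterCrop scheme (a common convention, not from the original post). Repeated runs over the same images then produce identical outputs:

from torchvision import transforms

# Deterministic transforms for inference: no random crop or flip
eval_transform = transforms.Compose([
    transforms.Resize(256),               # scale the shorter side to 256
    transforms.CenterCrop(input_size),    # fixed central crop, e.g. 224
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

Random augmentations like RandomResizedCrop and RandomHorizontalFlip belong only in the training pipeline, where the variation helps generalization.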


That’s right!
I completely missed that, even though it was obvious in hindsight.
Thank you so much! 🙂