Trained model, same output for all images


I’ve trained a slightly modified VGG11 model to classify 5 main landforms: forests, cities, rivers, mountains and plains, and reached 90% accuracy.
After saving the model as a .pth file, I tried to load it, but I get the same output values for every image I pass through it:

Variable containing:
-0.3292 8.9667 -3.5820 0.6472 -5.6224
[torch.FloatTensor of size 1x5]

Why does this happen?

Code for loading the model and passing a random image from the test set:

def vgg11(pretrained=False, **kwargs):
	"""VGG 11-layer model (configuration "A")

	Args:
		pretrained (bool): If True, returns a model pre-trained on ImageNet
	"""
	if pretrained:
		kwargs['init_weights'] = False
	model = VGG(make_layers(cfg['A_satelit']), **kwargs)
	if pretrained:
		model.load_state_dict(torch.load(...))  # checkpoint path elided in the original post
	return model

modelVGG_iulia = vgg11(pretrained=True)


img = Image.open("/home/iuliar/CERCETARE_NEURAL/test_mic/paduri/4.png").convert("RGB")
#pixels = img.load()
pixels = np.asarray(img)
pixels = np.reshape(pixels, (3, len(pixels[0]), len(pixels)))

pixels = pixels/255
pixels = np.expand_dims(pixels, axis=0)
x = torch.from_numpy(pixels)
x = x.type(torch.FloatTensor)
x = Variable(x)
output = modelVGG_iulia(x)

_, preds = torch.max(output, 1)

print("The output is " + str(output))
print("The prediction is " + str(preds))

I saved the model like this:

if phase == 'valid' and epoch_acc > best_acc:
				best_acc = epoch_acc
				best_model_wts = model.state_dict()
				torch.save(best_model_wts, "/home/iulia/CERCETARE_NEURAL/MODELE_SALVATE/model_"+str(epoch)+".pth")
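For reference, a minimal sketch of the usual state_dict save/load round trip; the model and file name here are placeholders, not the original VGG code. Note the model.eval() call before inference:

```python
import torch
import torch.nn as nn

# Minimal sketch of the state_dict round trip; nn.Linear stands in for
# the real VGG model and the file name is a placeholder.
model = nn.Linear(4, 5)
torch.save(model.state_dict(), "model_best.pth")

restored = nn.Linear(4, 5)
restored.load_state_dict(torch.load("model_best.pth"))
restored.eval()  # disable dropout/batchnorm train-mode behavior before inference
```

Without model.eval(), layers such as dropout keep running in training mode during inference.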

I think you might be setting all pixel values to zero with this line — under Python 2, dividing an integer NumPy array by an integer performs floored division, so every value below 255 becomes 0:

pixels = pixels/255

Change it to

pixels = pixels / 255.

and run it again.