I am using a pretrained ResNet18 and I have two issues:
- No matter what image I input, I get the same class (463) as output.
- The docs say the image must be at least 224×224, but for me anything above 224×224 throws a size-mismatch error, while 200×200 and similar sizes work.
I have normalised manually with the values given in the docs.
import torch
import scipy.misc
import matplotlib.pyplot as plt
from torchvision import models, transforms

resnet = models.resnet18(pretrained=True)

img = plt.imread('../input/bear.png')

# Defined but not used below; I normalise manually instead
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

img = scipy.misc.imresize(img, [244, 244])
img = img / 255.
img[:, :, 0] = (img[:, :, 0] - 0.485) / 0.299
img[:, :, 1] = (img[:, :, 1] - 0.456) / 0.224
img[:, :, 2] = (img[:, :, 2] - 0.406) / 0.225

img = torch.tensor(img)
img.transpose_(1, 2)
img.transpose_(0, 1)
img = img.unsqueeze_(0)

y = resnet(img[:, :, :220, :220].float())
print(y.max(), y.argmax(1))
y.argmax() always gives 463, with y.max() around 2.0. VGG16, on the other hand, works fine. I am using PyTorch in a Kaggle kernel.
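To make the intended preprocessing concrete, here is a minimal NumPy sketch of the per-channel normalisation and the HWC→NCHW reshaping I am attempting. The mean/std constants are the ImageNet values from the torchvision docs; the random array is just a stand-in for a real image already scaled to [0, 1].

```python
import numpy as np

# ImageNet normalisation constants from the torchvision docs
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

# Stand-in for a loaded RGB image in [0, 1], shape (H, W, C)
img = np.random.rand(224, 224, 3)

# Per-channel normalisation; broadcasting applies mean/std along the channel axis
normalized = (img - mean) / std

# HWC -> CHW, then add a batch dimension: (1, 3, 224, 224)
batch = normalized.transpose(2, 0, 1)[None]
print(batch.shape)  # (1, 3, 224, 224)
```

This is the layout a torchvision model expects at its input: a float batch of shape (N, C, H, W) with each channel normalised separately.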