Hi, a month ago I was able to pass 300x300 images to a pretrained DenseNet model without any problem; I would just replace the last layer with densenet161.classifier = nn.Linear(in_features, 7).
But now the forward pass fails for 300x300 images: the feature extractor outputs a 19872-dimensional vector, while the classifier's in_features expects a 2208-dimensional one. What should I do? Should I retrain my model on 224x224 images?
Adding code -
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

res = models.densenet161(pretrained=False)
res.classifier = nn.Linear(res.classifier.in_features, 7)
data_transforms = transforms.Compose([
    transforms.Resize((300, 300), Image.BILINEAR),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
img = Image.open('bla.bmp')  # a 640x480 RGB image
img2 = data_transforms(img).unsqueeze(0).cuda()  # shape (1, 3, 300, 300)
res = res.cuda()
print(res(img2))  # RuntimeError: size mismatch
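In case it helps anyone debugging the same mismatch: assuming the torchvision version in use runs the older DenseNet forward (a fixed F.avg_pool2d with kernel 7, stride 1, rather than the adaptive pooling newer releases use), the arithmetic of the two sizes can be sketched like this. The shapes below are my assumption about where 19872 comes from, not output from the actual model:

```python
import torch
import torch.nn.functional as F

# DenseNet-161's feature extractor ends with 2208 channels.
# A 224x224 input yields a 7x7 feature map; a 300x300 input yields 9x9.
feat = torch.randn(1, 2208, 9, 9)

# Older torchvision forward: fixed 7x7 average pool with stride 1,
# so a 9x9 map is reduced to 3x3, not 1x1.
old = F.avg_pool2d(feat, kernel_size=7, stride=1).view(1, -1)
print(old.shape[1])  # 2208 * 3 * 3 = 19872

# Global (adaptive) pooling collapses any spatial size to 1x1,
# so the flattened vector matches classifier.in_features again.
new = F.adaptive_avg_pool2d(feat, (1, 1)).view(1, -1)
print(new.shape[1])  # 2208
```

So upgrading torchvision (or swapping the fixed pool for adaptive pooling in the forward) should let 300x300 inputs work without retraining on 224x224.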