Hi, a month ago I was able to pass 300x300 images through a pretrained DenseNet model without failure; I would replace the last layer with densenet161.classifier = nn.Linear(in_features, 7).
But now I am unable to forward-pass 300x300 images through it: the feature extractor produces a 19872-element vector, while in_features expects a 2208-element vector. What should I do? Should I retrain my model on 224x224 images?
Adding code -
import torch
import torch.nn as nn
from PIL import Image
from torch.autograd import Variable
from torchvision import models, transforms

res = models.densenet161(pretrained=False)
res.classifier = nn.Linear(res.classifier.in_features, 7)
data_transforms = transforms.Compose([
    transforms.Resize((300, 300), Image.BILINEAR),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
img = Image.open('bla.bmp')  # this is a 640x480x3 image
img2 = Variable(data_transforms(img).view(1, 3, 300, 300).cuda())
res = res.cuda()
print(res(img2))  # RuntimeError: size mismatch
Where did you get the pretrained DenseNet model? Is it from PyTorch? If so, there should be a pretrained 224 by 224 model somewhere.
But if you trained a 300 by 300 image model yourself and now want to use 224 by 224 images, you can rescale those images back to 300 by 300 before feeding them to the model.
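A minimal sketch of the rescaling step, using PIL's resize (the blank image here is just a hypothetical stand-in for a real photo):

```python
from PIL import Image

# Hypothetical stand-in for a 224x224 input image.
img = Image.new('RGB', (224, 224))

# Upscale back to 300x300 so it matches what the model was trained on.
img_300 = img.resize((300, 300), Image.BILINEAR)
print(img_300.size)  # (300, 300)
```

The same effect is achieved inside a transform pipeline with transforms.Resize((300, 300)).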
I had earlier trained on 300 by 300 images, and yes, I got it from the PyTorch model zoo. But now it only accepts 224x224 images; it is unable to take 300x300 images, or any other size. Only 224x224 images.
When I print the size of the feature-extractor output after passing a 300x300 image, I get 1x2208x9x9. This is then passed to an in-place ReLU and an avg_pool with kernel_size=7 (as per the DenseNet source code); the error disappears when I change this kernel size to 9.
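The 7x7 vs 9x9 feature maps follow directly from DenseNet's five stride-2 stages (with floor division at each one). A pure-Python sketch, assuming the standard torchvision DenseNet-161 layout (stride-2 7x7 conv, stride-2 3x3 maxpool, then three stride-2 average pools in the transition layers):

```python
def feature_map_side(side):
    """Spatial side length of the DenseNet feature map for a square input."""
    side = (side + 2 * 3 - 7) // 2 + 1   # 7x7 conv, stride 2, padding 3
    side = (side + 2 * 1 - 3) // 2 + 1   # 3x3 maxpool, stride 2, padding 1
    for _ in range(3):                   # three 2x2 stride-2 transition avg pools
        side = side // 2
    return side

print(feature_map_side(224))  # 7 -> a kernel-7 avg_pool collapses this to 1x1
print(feature_map_side(300))  # 9 -> a kernel-7 avg_pool no longer gives 1x1
```

This is why the hard-coded kernel_size=7 only works for 224x224 inputs.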
Consider using adaptive pooling http://pytorch.org/docs/master/nn.html#adaptivemaxpool1d so that the output is always the expected size, instead of hand-picking the kernel size for each input resolution.
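A sketch of how adaptive average pooling sidesteps the problem, using a random tensor as a stand-in for the 1x2208x9x9 feature map reported above:

```python
import torch
import torch.nn.functional as F

# Stand-in for the DenseNet-161 feature-extractor output on a 300x300 input.
features = torch.randn(1, 2208, 9, 9)

# Adaptive pooling fixes the OUTPUT size (1x1 here), whatever the input
# spatial size, so the classifier always receives a 2208-vector.
pooled = F.adaptive_avg_pool2d(features, (1, 1))
flat = pooled.view(pooled.size(0), -1)
print(flat.shape)  # torch.Size([1, 2208])
```

Average pooling (rather than max) matches what the DenseNet source code does before the classifier.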