Change DenseNet169 input size to 320x320


(Fahad Ahmed Khan) #1

Hi! I’m trying to implement the baseline model described in the Stanford MURA paper. The following is the code for my model:

import torch.nn as nn
from torchvision.models import densenet169

model = densenet169(pretrained=True)

# Replace the default 1000-class classifier with a single sigmoid output
model.classifier = nn.Sequential(
    nn.Linear(in_features=1664, out_features=1),
    nn.Sigmoid()
)

model = model.cuda()

This works fine with the default input size of 224x224. However, the paper specifies an input size of 320x320, and when an input of that shape is fed to the model defined above, it unsurprisingly throws a shape mismatch error. I’m new to PyTorch (and deep learning in general) and I’m having trouble implementing this. Can someone please help? I’ve looked at solutions proposed for ResNet and other networks, but I haven’t been able to apply any of them to DenseNet169.


(ptrblck) #2

The error message most likely includes the shapes of the tensors involved in the mismatch.
In your case, it’s:

RuntimeError: size mismatch, m1: [1 x 26624], m2: [1664 x 1] at …

If you change the in_features of your linear layer to 26624, your code will run successfully with 320x320 shaped inputs.
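The underlying issue is that the flattened feature size depends on the spatial resolution of the input, so the linear layer’s `in_features` must change when the input size changes. A minimal self-contained sketch (a toy conv net standing in for DenseNet’s feature extractor, not DenseNet itself) illustrating this:

```python
import torch
import torch.nn as nn

# Toy feature extractor: one conv + pooling that downsamples by 32,
# roughly mimicking the overall stride of DenseNet169's features.
features = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.MaxPool2d(kernel_size=32),
)

for size in (224, 320):
    x = torch.randn(1, 3, size, size)
    flat = features(x).flatten(1)
    # The classifier's Linear layer must use this as its in_features:
    # 224 -> 8*7*7 = 392, 320 -> 8*10*10 = 800 for this toy net.
    print(size, flat.shape[1])
```

The same arithmetic explains the numbers above: with a 320x320 input, DenseNet169’s feature map flattens to 26624 values instead of the 1664 the default classifier expects.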

Another approach would be to print the shape of the incoming tensors using this small module:

class Print(nn.Module):
    """Pass-through module that prints the shape of the incoming tensor."""
    def forward(self, x):
        print(x.shape)
        return x

model.classifier = nn.Sequential(
    Print(),
    nn.Linear(in_features=26624, out_features=1),
    nn.Sigmoid()
)
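For reference, here is a standalone sketch of the `Print` module in action on a small stand-in network (the `nn.Flatten`/`nn.Linear` stack here is just an illustration, not the DenseNet classifier):

```python
import torch
import torch.nn as nn

class Print(nn.Module):
    """Pass-through module that prints the shape of the incoming tensor."""
    def forward(self, x):
        print(x.shape)
        return x

# Print sits between layers and leaves the tensor untouched,
# so you can read off the in_features the next Linear layer needs.
debug = nn.Sequential(nn.Flatten(), Print(), nn.Linear(12, 1))
y = debug(torch.randn(2, 3, 2, 2))  # Print shows torch.Size([2, 12])
print(y.shape)  # torch.Size([2, 1])
```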

(Fahad Ahmed Khan) #3

Hi, @ptrblck. Thanks for the prompt response. This worked! Thank you so much.