Resnet for small input images

I would like to test the resnet18 from torchvision with my own data (satellite imagery), since a lot of people seem to use it. However, my training set is built from 65x65x4 images (4 spectral bands), and ResNet is designed for 224x224 inputs and does not accept images this small. Note that I cannot increase my training image size (otherwise I would simply use images of the dimensions ResNet usually processes).
I suspect that the successive strided convolutions shrink the feature maps by too large a factor, but I am unsure how to deal with it. If I remove the strides, I will (i) lose the pooling effect and (ii) stray from the ResNet definition.

Some questions about the PyTorch implementation of ResNet (from torchvision):

  • Why is there a max-pooling layer in addition to the strided conv?

    class ResNet(nn.Module):
        def __init__(self, block, layers, num_classes=1000):
            self.inplanes = 64
            super(ResNet, self).__init__()
            self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
                                   bias=False)
            self.bn1 = nn.BatchNorm2d(64)
            self.relu = nn.ReLU(inplace=True)
            self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
  • Why is there no bottleneck layer in resnet18 and resnet34?
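For context on the first question, the stride-2 conv and the stride-2 maxpool each halve the spatial size, so together the stem reduces a 224x224 input by a factor of 4 before the first residual block. A quick check of the shapes (using the same layer parameters as the snippet above):

```python
import torch
import torch.nn as nn

# The two stem layers from the torchvision ResNet snippet above.
conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

x = torch.randn(1, 3, 224, 224)
after_conv = conv1(x)            # stride 2 halves the resolution
after_pool = maxpool(after_conv)  # stride 2 halves it again
print(after_conv.shape)  # torch.Size([1, 64, 112, 112])
print(after_pool.shape)  # torch.Size([1, 64, 56, 56])
```

On a 65x65 input the same stem would leave only 17x17 before layer1, which illustrates why the later strided stages run out of resolution on small patches.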

Thank you !