Transfer Learning for smaller images

I am pretty new to PyTorch and I have a question about transfer learning from one of the models downloaded from torchvision.

I understand that the torchvision models were originally trained on images of size [3 x 224 x 224], but I am looking to adapt one of the models with fewer layers to images of size [1 x 28 x 28]. The image dataset I have is very small.

What I did first was to repeat my single input channel 3 times to create a [3 x 28 x 28] tensor, and I am using vgg11 because it is one of the smaller models. However, I am now running into the following error: Given input size: (512x1x1). Calculated output size: (512x0x0). Output size is too small. My images are way too small for even vgg11.
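For reference, the channel repetition step was roughly this (just a sketch, assuming x is one grayscale image as a [1 x 28 x 28] tensor):

import torch

x = torch.randn(1, 28, 28)  # stand-in for one of my grayscale images
x = x.repeat(3, 1, 1)       # repeat the single channel 3 times -> [3 x 28 x 28]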

I understand that there is a workaround - resizing the images to something larger such as [1 x 128 x 128] should work, since recent versions of torchvision do allow input tensors smaller than 224 in height and width. But I believe that upscaling the images will blur my image features, since interpolation cannot add detail, and I think it may hurt the model.
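That is, something along these lines (a sketch; the target size here is arbitrary):

import torchvision.transforms as transforms

resize = transforms.Resize((128, 128))  # upscales [1 x 28 x 28] images to [1 x 128 x 128]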
Is there a way to trim one of these models so that it is suitable for smaller images?

You could copy the torchvision VGG implementation from here and adapt it to your needs (e.g. by removing or replacing layers).
Another approach would be to create the torchvision model object and manipulate its layers directly, but I think this approach is more suitable for smaller changes, such as replacing the last linear layer for transfer learning.
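As a rough sketch of that second approach (the weights argument and the 10-class output here are just assumptions for illustration):

import torch.nn as nn
import torchvision.models as models

model = models.vgg11(weights="IMAGENET1K_V1")  # use pretrained=True on older torchvision versions
model.classifier[6] = nn.Linear(4096, 10)      # replace the last linear layer for a 10-class task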

I had the same problem. The best solution, I'd say, depends on how small the images are. For (1, 28, 28) I just padded the images to the minimum VGG11 input dimensions: (in_channels, 32, 32).

For MNIST:
image_dim = (1, 28, 28)
So 4 pixels need to be added along each spatial dimension (2 on each side).

I used:

import torchvision.transforms as transforms

# Pad(2) adds 2 pixels to every border: 28x28 -> 32x32
padding_func = transforms.Pad(2)
x = padding_func(x)

In most cases it is probably best to just add it to the transforms pipeline:

import torchvision

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Pad(2),                    # pad before normalizing so the border matches the black background
    transforms.Normalize((0.5,), (0.5,))  # MNIST images have a single channel
])

test_set = torchvision.datasets.MNIST(..., transform=transform)
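You can then sanity-check the padded shape (assuming the MNIST setup above):

img, label = test_set[0]
print(img.shape)  # torch.Size([1, 32, 32])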