Transfer Learning for smaller images

I am pretty new to PyTorch and I have a question regarding transfer learning from one of the models downloaded from torchvision.

I understand that the torchvision models were originally trained on images of size [3 x 224 x 224], but I am looking to adapt one of the models with fewer layers to images of size [1 x 28 x 28]. The image dataset that I have is also very small.

What I did first was repeat my single input channel 3 times to create a [3 x 28 x 28] tensor, and I am using vgg11 because it is one of the smaller models. However, I am now running into the following error: `Given input size: (512x1x1). Calculated output size: (512x0x0). Output size is too small.` My images are way too small even for vgg11.
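For reference, the channel-repeat step above can be sketched like this (the batch size of 8 is just a made-up example):

```python
import torch

# Hypothetical grayscale batch: [N, 1, 28, 28]
x = torch.randn(8, 1, 28, 28)

# Repeat along the channel dimension so the tensor matches the
# 3-channel input that the torchvision models expect.
x_rgb = x.repeat(1, 3, 1, 1)
print(x_rgb.shape)  # torch.Size([8, 3, 28, 28])
```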

I understand that there is a workaround: resize the images up to, say, [1 x 128 x 128], which should work since recent versions of torchvision no longer require inputs to be exactly 224 in height and width. But upsampling cannot add information, so the enlarged images will just be blurrier versions of the originals, and I think this may impact the model.
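As a quick sketch of that workaround, the small inputs can be upsampled with bilinear interpolation (the sizes here are assumptions from the post, not requirements):

```python
import torch
import torch.nn.functional as F

# Hypothetical grayscale batch at the original resolution.
x = torch.randn(8, 1, 28, 28)

# Bilinear upsampling from 28x28 to 128x128. This does not add any
# information, but it keeps the spatial size from collapsing to zero
# inside the pretrained conv/pool stack.
x_big = F.interpolate(x, size=(128, 128), mode="bilinear", align_corners=False)
print(x_big.shape)  # torch.Size([8, 1, 128, 128])
```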
Is there a way to trim any model so that it can be suitable for smaller images?

You could copy the torchvision VGG implementation from here and adapt it to your needs (e.g. by removing or replacing layers).
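To illustrate the idea: the torchvision VGG implementation builds its feature extractor from a config list (numbers are conv output channels, `'M'` is a max pool). A trimmed, purely hypothetical config with only two pools keeps a 28x28 input at 7x7 instead of collapsing it to 0x0:

```python
import torch
import torch.nn as nn

# Hypothetical small config, not one of the official VGG configs.
cfg = [32, "M", 64, "M"]

def make_layers(cfg, in_channels=1):
    # Simplified version of torchvision's VGG layer builder:
    # 'M' -> 2x2 max pool, number -> 3x3 conv + ReLU.
    layers = []
    for v in cfg:
        if v == "M":
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
        else:
            layers.append(nn.Conv2d(in_channels, v, kernel_size=3, padding=1))
            layers.append(nn.ReLU(inplace=True))
            in_channels = v
    return nn.Sequential(*layers)

features = make_layers(cfg)
out = features(torch.randn(8, 1, 28, 28))
print(out.shape)  # torch.Size([8, 64, 7, 7])
```

With only two pooling stages, 28 → 14 → 7, so the feature map never shrinks below 1x1.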
Another approach would be to create the torchvision model object and manipulate the layers directly, but I think this approach would be more suitable for smaller changes, such as replacing the last linear layer for transfer learning.