Modifying ResNet is very easy, and it is more powerful than VGG.
This is a copy of the official PyTorch implementation:
class ResNet(nn.Module):

    def __init__(self, block, layers, num_classes=1000):
        self.inplanes = 64
        super(ResNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
                               bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AvgPool2d(7, stride=1)
        self.fc = nn.Linear(512 * block.expansion, num_classes)
You just have to change self.conv1 to

self.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3,
                       bias=False)
And that’s all.
I am afraid there is no fine-tuning for that layer… you would be training it from scratch.
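As a concrete sketch, the same change applied to torchvision's resnet18 looks like this (the num_classes value is just illustrative); note that the replacement conv1 is freshly initialised, so its weights really are trained from scratch:

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(num_classes=10)    # num_classes is just an example
# Swap the 3-channel input conv for a 1-channel one; this layer starts from random weights.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)

out = model(torch.randn(2, 1, 224, 224))   # a batch of single-channel 224x224 images
print(out.shape)                           # torch.Size([2, 10])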
ResNet input is 224x224 by default. The code will also run with 64x64 inputs (you may need to adjust the fixed 7x7 average pooling to cope with the smaller feature map), but the pretraining would not be very useful.
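If the fixed AvgPool2d(7) complains about the smaller feature map, one option (a sketch, assuming torchvision's resnet18) is to swap it for adaptive pooling:

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18()                      # standard 3-channel ResNet-18
model.avgpool = nn.AdaptiveAvgPool2d((1, 1))   # pool whatever size layer4 produces down to 1x1

out = model(torch.randn(1, 3, 64, 64))         # a 64x64 input now goes through without error
print(out.shape)                               # torch.Size([1, 1000])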
You should also consider what you are using this network for. The spatial output of the convolutional backbone with a 224x224 input is 7x7 (ignoring the average pooling and fully connected layers); a 64x64 input will generate a much smaller 2x2 feature map, as the sketch below shows.
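To see this concretely, here is a small sketch (again assuming torchvision's resnet18) that runs only the convolutional backbone and prints the resulting feature map sizes:

import torch
from torchvision import models

model = models.resnet18()

def feature_map(x):
    # Forward pass through everything before avgpool/fc.
    x = model.conv1(x); x = model.bn1(x); x = model.relu(x); x = model.maxpool(x)
    x = model.layer1(x); x = model.layer2(x); x = model.layer3(x); x = model.layer4(x)
    return x

print(feature_map(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 512, 7, 7])
print(feature_map(torch.randn(1, 3, 64, 64)).shape)    # torch.Size([1, 512, 2, 2])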
Considering those facts, make the best choice for your use case.