Using a frozen pretrained resnet18 as a feature extractor for CIFAR-10

Hello,

I am trying to use a pretrained resnet18 on CIFAR-10, training only the last fully connected layer:

    import torch
    import torch.optim as optim
    from torchvision import models

    model = models.resnet18(pretrained=True)
    for param in model.parameters():
        param.requires_grad = False

    num_ftrs = model.fc.in_features
    model.fc = torch.nn.Linear(num_ftrs, 10)
    optimizer = optim.Adam(model.fc.parameters())

Since resnet18 expects 224x224 images while CIFAR-10 images are 32x32, I added a resize transform in the data loading:

transform = transforms.Compose([transforms.Resize(224), transforms.ToTensor()])

However, the accuracy remains at 10% (chance level) even after long training. I suspect this is due to the resizing, or to the fact that resnet18 expects images from a different distribution, possibly normalized. If this is the case, what transform should I use?


For sure resnets need normalization, see the data_transforms in the Transfer learning tutorial:
https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html#sphx-glr-beginner-transfer-learning-tutorial-py
where you’d use the ‘val’ transforms for your test data.

Also, I don’t see your loss or “criterion” function. You might want
criterion = nn.CrossEntropyLoss()
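For reference, a minimal sketch of how CrossEntropyLoss is used: it takes raw logits (no softmax applied by you) plus integer class labels:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# A batch of one sample with 3 classes; criterion applies log-softmax internally
logits = torch.tensor([[2.0, 0.5, -1.0]])  # raw model outputs
labels = torch.tensor([0])                 # correct class index
loss = criterion(logits, labels)
```

In a training loop the logits would be `model(inputs)`, and `loss.backward()` would only produce gradients for the unfrozen `model.fc` parameters.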