Hi, I want to fine-tune some of the convolutional filters in ResNet-50. To achieve this, I need to freeze the other convolutional filters. As far as I know, I can only freeze convolutional “layers”, but not convolutional “filters”.
I already tried the following code, but it seems impossible to make only selected filters within a convolutional layer require gradients.
import torch
import torchvision

net = torchvision.models.resnet50(pretrained=True)
for name, param in net.named_parameters():
    if 'conv' in name:
        param.requires_grad = False
        # param[0] is a fresh view created on each access, so this flag is
        # set on a temporary tensor and never sticks to the parameter itself
        param[0].requires_grad = True
        print(param.requires_grad, param[0].requires_grad)  # expected False True, but prints False False
Is there any way to perform this kind of fine-tuning?
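Update: one workaround I came across is to leave the whole parameter trainable and zero out the gradients of the frozen filters with a tensor hook (torch.Tensor.register_hook). A minimal sketch; net.conv1.weight and filter index 0 are just placeholders for whichever filters you want to train:

import torch
import torchvision

net = torchvision.models.resnet50(pretrained=True)

param = net.conv1.weight
param.requires_grad = True  # the whole tensor must require gradients

mask = torch.zeros_like(param)
mask[0] = 1.0  # 1 = trainable filter, 0 = frozen filter

# The hook scales every incoming gradient by the mask, so the frozen
# filters always receive a zero gradient and are never updated.
param.register_hook(lambda grad: grad * mask)

One caveat: an optimizer with weight decay still updates filters whose gradient is zero, so plain SGD without weight decay is the safer choice with this trick.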
When you say “filters” and “layers”, do you mean weights and biases? If that is the case, here is one way you might go about it:
import torch
import torchvision
import torch.nn as nn

net = torchvision.models.resnet50(pretrained=True)
with torch.no_grad():
    for layer in net.modules():
        if isinstance(layer, nn.Conv2d):
            print(layer)
            print(layer.weight.requires_grad)
            layer.weight.requires_grad = False
            print(layer.weight.requires_grad)
            # Check whether the layer actually has a bias; otherwise you'll get
            # AttributeError: 'NoneType' object has no attribute 'requires_grad'
            if layer.bias is not None:
                layer.bias.requires_grad = False
                print(layer.bias.requires_grad)
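Once the flags are set, a common follow-up (optional, but it keeps the optimizer state small) is to hand only the still-trainable parameters to the optimizer; the learning rate below is just a placeholder:

import torch

# Collect only the parameters that still require gradients
trainable = [p for p in net.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-3)  # lr is a placeholder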
As a side note, the Conv2d layers in ResNet-50 do not use a bias, because each one is followed by a BatchNorm2d whose learnable shift makes a conv bias redundant. To target those BatchNorm2d layers, the same pattern works:
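with torch.no_grad():
    for layer in net.modules():
        if isinstance(layer, nn.BatchNorm2d):
            # BatchNorm2d holds a weight (gamma) and a bias (beta)
            layer.weight.requires_grad = False
            layer.bias.requires_grad = False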