PyTorch Transfer learning with Densenet

So, I’ve been trying to modify http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html to work with DenseNet instead of ResNet, but I can’t seem to figure out what to change the fc layer to.

This is what I currently have:

model_ft = models.densenet169(pretrained=True)
num_features = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_features, len(CLASS_NAMES))

but running the code gives me this error:

Traceback (most recent call last):
  File "pytorch_densenet.py", line 154, in <module>
    num_features = model_ft.fc.in_features
  File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 398, in __getattr__
    type(self).__name__, name))
AttributeError: 'DenseNet' object has no attribute 'fc'

So, uh… How do I change the last layer of DenseNet to work with Transfer Learning?

Thanks a lot.


Looking at the model https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py it looks like they assigned all of the conv layers to self.features and all of the classification layers to self.classifier.

In your case, model.features should give you the feature extractor that you want, but not all of the torchvision models follow the same API. In the future you will just have to look at the model in that repo and figure out where the final classification layer is.
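For example, adapting the snippet from the question (a minimal sketch; CLASS_NAMES comes from the original post):

import torch.nn as nn
from torchvision import models

model_ft = models.densenet169(pretrained=True)
# DenseNet exposes its last layer as `classifier`, not `fc`
num_features = model_ft.classifier.in_features  # 1664 for densenet169
model_ft.classifier = nn.Linear(num_features, len(CLASS_NAMES))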


Did you figure how to use densenet instead of resnet? I cannot find an example that does so.

I get the following error:

/scratch/sjn-p3/anaconda/anaconda3/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/models/densenet.py:212: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.
Downloading: "https://download.pytorch.org/models/densenet161-8d451a50.pth" to /home/grad3/jalal/.torch/models/densenet161-8d451a50.pth
100%|██████████| 115730790/115730790 [00:04<00:00, 24886091.87it/s]

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-290-2e17f45b78dc> in <module>()
     12 
     13 
---> 14 num_ftrs = model_ft.fc.in_features
     15 model_ft.fc = nn.Linear(num_ftrs, 9)
     16 

/scratch/sjn-p3/anaconda/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
    516                 return modules[name]
    517         raise AttributeError("'{}' object has no attribute '{}'".format(
--> 518             type(self).__name__, name))
    519 
    520     def __setattr__(self, name, value):

AttributeError: 'DenseNet' object has no attribute 'fc'

for the following code:

######################################################################
# Finetuning the convnet
# ----------------------
#
# Load a pretrained model and reset final fully connected layer.
#

class_weights = torch.FloatTensor(weight).cuda()
#model_ft = models.resnet18(pretrained=True)
###model_ft = models.resnet50(pretrained=True)
model_ft = models.densenet161(pretrained=True)


num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 9)

model_ft = model_ft.to(device)

criterion = nn.CrossEntropyLoss(weight=class_weights)

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
###optim.Adam(amsgrad=True)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)

Actually, I replaced these two lines and the training seems to be running, but I am not sure whether switching from resnet50 to densenet161 needs only these two replacements:

###num_ftrs = model_ft.fc.in_features
num_ftrs = model_ft.classifier.in_features
###model_ft.fc = nn.Linear(num_ftrs, 9)
model_ft.classifier = nn.Linear(num_ftrs, 9)
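In case it helps others: if you want feature extraction instead of full fine-tuning (train only the new head), here is a minimal sketch of the variant I would expect to work (not verified on my data):

from torchvision import models
import torch.nn as nn
import torch.optim as optim

model_ft = models.densenet161(pretrained=True)
for param in model_ft.parameters():
    param.requires_grad = False          # freeze the pretrained feature extractor
# the freshly created head is trainable by default
model_ft.classifier = nn.Linear(model_ft.classifier.in_features, 9)
# optimize only the new classifier's parameters
optimizer_ft = optim.SGD(model_ft.classifier.parameters(), lr=0.001, momentum=0.9)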


ResNet uses the name fc for its last layer, while DenseNet uses the name classifier. You can see these names and the indexing by printing out the model:

model = models.densenet161(pretrained=True)
print(model)
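If you switch between the two families often, a small helper can hide the naming difference. A sketch (reset_head is just an illustrative name, and this only covers models whose head is a single Linear layer):

import torch.nn as nn
from torchvision import models

def reset_head(model, num_classes):
    # ResNet-style models name their head `fc`
    if hasattr(model, 'fc'):
        model.fc = nn.Linear(model.fc.in_features, num_classes)
    # DenseNet-style models name it `classifier`
    else:
        model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

model = reset_head(models.densenet161(pretrained=True), 9)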

This is well documented in the PyTorch tutorials. Here is an extract with your solution (agreeing with what @rusty said above):

"To reshape the network, we reinitialize the classifier's linear layer as model.classifier = nn.Linear(1024, num_classes)"

Source: https://pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html
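Note that the 1024 in that line is specific to densenet121. A small sketch that avoids hard-coding it (num_classes is whatever your task needs):

import torch.nn as nn
from torchvision import models

model = models.densenet121(pretrained=True)
# read the feature size off the model instead of hard-coding 1024
model.classifier = nn.Linear(model.classifier.in_features, num_classes)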

To add some extra information: if you want to use an individual block of DenseNet in your own network, you can pull it out like this (the self. here assumes the code lives inside an nn.Module; see the sketch below):

pretrained_model = densenet121()
self.features = pretrained_model._modules['features']
self.block = self.features._modules['denseblock1']
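Here is a minimal, self-contained sketch of such a wrapper module (DenseBlockFeatures is just an illustrative name):

import torch.nn as nn
from torchvision.models import densenet121

class DenseBlockFeatures(nn.Module):
    # illustrative wrapper around the stem plus the first dense block of DenseNet-121
    def __init__(self):
        super().__init__()
        pretrained_model = densenet121(pretrained=True)
        features = pretrained_model.features
        # conv0/norm0/relu0/pool0 come before denseblock1 in torchvision's DenseNet
        self.stem = nn.Sequential(features.conv0, features.norm0,
                                  features.relu0, features.pool0)
        self.block = features.denseblock1

    def forward(self, x):
        return self.block(self.stem(x))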

You can add a customized classifier as follows:

1. Check the architecture of your model; in this case it is a DenseNet-161. Printing it yields (only the last layers are shown here):

      )
      (denselayer24): _DenseLayer(
        (norm1): BatchNorm2d(2160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace=True)
        (conv1): Conv2d(2160, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (norm2): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace=True)
        (conv2): Conv2d(192, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      )
    )
    (norm5): BatchNorm2d(2208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (classifier): Linear(in_features=2208, out_features=1000, bias=True)
)

So you can see that norm5 has an output size of 2208; that is the output size of DenseNet-161 without the classifier.

2. You can verify the number of features (which equals that output size):

num_ftrs = model_transfer.classifier.in_features
num_ftrs

This again prints 2208.

3. Now you can add your own classifier to the network:

model_transfer.classifier = nn.Sequential(
    nn.Linear(num_ftrs, 256),
    nn.ReLU(),
    nn.Dropout(0.4),
    nn.Linear(256, n_classes),   # n_classes = 133 in this example
    nn.LogSoftmax(dim=1))
4. Verify by printing the model with

model_transfer

which yields:

    (norm5): BatchNorm2d(2208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (classifier): Sequential(
    (0): Linear(in_features=2208, out_features=256, bias=True)
    (1): ReLU()
    (2): Dropout(p=0.4, inplace=False)
    (3): Linear(in_features=256, out_features=133, bias=True)
    (4): LogSoftmax()
  )
)

Now you are ready to use your own DenseNet-161!
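One caveat on training: because this classifier ends in LogSoftmax, pair it with NLLLoss rather than CrossEntropyLoss (CrossEntropyLoss applies its own log-softmax internally and expects raw logits), e.g.:

import torch.nn as nn

# the model already outputs log-probabilities, so NLLLoss is the right pairing
criterion = nn.NLLLoss()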