I.e., the problem is that the last layer has a different name in different pretrained models, e.g. “fc” for ResNet, “classifier” for DenseNet, etc. I tried to access the layer the way you would access an OrderedDict (d.items()…), but didn’t succeed.
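For illustration, the mismatch is visible directly on torchvision models (a quick sketch; model choices here are just examples):

```python
import torchvision.models as models

resnet = models.resnet18()
densenet = models.densenet121()

# Both heads are plain nn.Linear modules, but live under different names:
print(type(resnet.fc))           # ResNet calls its head "fc"
print(type(densenet.classifier)) # DenseNet calls its head "classifier"
```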
I don’t believe there is a generic, safe way to get the last layer, since even indexing the last module from model.children() would return the module that was initialized last, not necessarily the layer that is used last in the forward. A model can also take different execution paths in its forward (e.g. Inception models return either the final logits alone or the aux. logits as well), so the cleanest approach would be to check the forward of each model and make sure you are indeed replacing the “last” layer.
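In practice that usually means dispatching on the architecture explicitly. A minimal sketch, assuming torchvision models (replace_head is a hypothetical helper name, and the mapping only covers the architectures it names):

```python
import torch.nn as nn
import torchvision.models as models

def replace_head(model, num_classes):
    # Hypothetical helper: each branch was checked against the model's
    # forward, so we know we are replacing the layer actually used last.
    if isinstance(model, models.ResNet):
        # ResNet's forward ends with self.fc(x)
        model.fc = nn.Linear(model.fc.in_features, num_classes)
    elif isinstance(model, models.DenseNet):
        # DenseNet's forward ends with self.classifier(out)
        model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    else:
        # Don't guess for unknown architectures; check their forward first.
        raise ValueError(f"Unknown architecture: {type(model).__name__}")
    return model

model = replace_head(models.resnet18(), num_classes=10)
```

The explicit isinstance checks are the point: adding a new architecture forces you to read its forward once, instead of silently grabbing whatever module happens to come last in model.children().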