Transfer Learning: Finetuning the ConvNet vs. ConvNet as fixed feature extractor

In the Transfer Learning for Computer Vision Tutorial, in the "ConvNet as fixed feature extractor" section, do we need the .fc in the code below?

# Observe that only parameters of final layer are being optimized as
# opposed to before.
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)

Or can it be rewritten using model_conv.parameters()?

I’m asking because I tested both versions and the results look the same. More importantly, since the tutorial has previously frozen the parameters of every layer except the new fc, model_conv.fc.parameters() should be equivalent to model_conv.parameters() as far as the optimizer is concerned (rough sketch of the tutorial's setup below).
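For reference, this is roughly the setup I mean, paraphrased from the tutorial rather than copied verbatim (I've left out loading the pretrained weights and the training loop):

import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Roughly the tutorial's setup (the tutorial loads ImageNet-pretrained weights here)
model_conv = models.resnet18()

# Freeze every existing parameter so no gradients are computed for them
for param in model_conv.parameters():
    param.requires_grad = False

# Replace the head; parameters of a newly constructed module require grad by default
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(num_ftrs, 2)

# Variant from the tutorial: register only the head's parameters
optimizer_head = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)

# Variant I tested: register everything; the frozen layers never get a .grad,
# so the optimizer skips them and the visible behavior is the same
optimizer_all = optim.SGD(model_conv.parameters(), lr=0.001, momentum=0.9)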

Can someone confirm please? Thanks!

The behavior will be the same in this use case. However, have a look at this post for a bit more detail on the edge cases where the two can differ.
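One concrete edge case, as a rough sketch with made-up layer names (not the tutorial's code): the optimizer skips a parameter only when its .grad is None; it does not look at requires_grad. So if a parameter is frozen after it has already accumulated a gradient and a momentum buffer, an optimizer built from model_conv.parameters() will keep moving it, while one built from model_conv.fc.parameters() cannot touch it at all:

import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)

# Hypothetical stand-in model: "body" plays the role of the frozen conv layers,
# "head" plays the role of model_conv.fc
body = nn.Linear(4, 4)
head = nn.Linear(4, 2)
model = nn.Sequential(body, head)

# Optimizer over *all* parameters, with momentum as in the tutorial
opt_all = optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# One regular training step: every parameter now has a .grad and a momentum buffer
model(torch.randn(8, 4)).sum().backward()
opt_all.step()

# Freeze the body afterwards; its stale .grad and momentum buffer are still there
for p in body.parameters():
    p.requires_grad = False

frozen_before = body.weight.detach().clone()

# Another step without clearing gradients: the "frozen" body still moves,
# because the optimizer only checks .grad, not requires_grad
opt_all.step()
print(torch.equal(body.weight, frozen_before))  # False: the frozen weight was updated anyway

# An optimizer built only from the head's parameters can never touch the body,
# no matter what is left in the body's .grad or momentum buffers
opt_head = optim.SGD(head.parameters(), lr=0.1, momentum=0.9)

In the tutorial the freezing happens before any backward pass, so the frozen parameters' .grad is still None and both constructions end up doing the same thing, which matches what you observed.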
