ULMFit fine-tuning freeze

Hello,

I have trained a language model and now I want to fine-tune this pre-trained model.

model.summary()               # summary before loading the encoder
model.load_encoder('lmtest')  # load the pre-trained language-model encoder
model.freeze()
model.summary()               # summary after loading and freezing

Before loading the encoder I look at the model summary, and once again after loading and freezing. However, in both summaries the trainable modules and the number of parameters are still the same. If I understood correctly, I would expect freeze() to set everything other than the last layer to non-trainable. So why doesn't freeze() change anything visible?

I am quite new to pytorch and I would appreciate your guidance.
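For context, freezing in plain PyTorch usually just flips `requires_grad` to `False` on the parameters; the module structure that `summary()` prints does not change, so the two summaries can look identical even when freezing worked. A minimal sketch in plain PyTorch (toy model, not fastai internals):

```python
import torch.nn as nn

# Toy model standing in for encoder + head (hypothetical structure)
model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 2))

def count_trainable(m):
    # Count only parameters that will receive gradients
    return sum(p.numel() for p in m.parameters() if p.requires_grad)

before = count_trainable(model)

# Freeze everything except the last layer, which is what freeze() is expected to do
for p in model[:-1].parameters():
    p.requires_grad = False

after = count_trainable(model)
print(before, after)  # the module structure is unchanged; only requires_grad flips
```

So the reliable check is counting trainable parameters, not reading the printed layer list.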

After freeze(), try printing:

for name, params in model.named_parameters():
    if params.requires_grad:
        print(name)

Thanks a lot! I tried it and got the following error:

'RNNLearner' object has no attribute 'named_parameters'

This post says it might be an indentation error, but I checked and there was none.

What’s RNNLearner? Also, if it’s a model which inherits from nn.Module, then the above 3 lines will definitely run.

It is from the ULMFiT architecture and, as I understand it, takes a SequentialRNN (which inherits from nn.Sequential) as input.
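That explains the error: the learner is a wrapper object that holds the model, not an nn.Module itself, so named_parameters lives on the model it wraps. A tiny sketch of the distinction (class and attribute names here are a hypothetical stand-in, not fastai's actual implementation):

```python
import torch.nn as nn

class Learner:
    """Toy stand-in for a learner object: wraps a model, is not a Module."""
    def __init__(self, model):
        self.model = model

learn = Learner(nn.Linear(4, 2))

print(hasattr(learn, 'named_parameters'))        # False - the wrapper has no such method
print(hasattr(learn.model, 'named_parameters'))  # True - the wrapped nn.Module does
```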

Try this,

for name, params in your_learner_object.model.named_parameters():
    if params.requires_grad:
        print(name)

Thanks a lot! It works now!
It only prints parameters from the last layer so it means freeze() works I think :slight_smile:

1.layers.0.weight
1.layers.0.bias
1.layers.2.weight
1.layers.2.bias
1.layers.4.weight
1.layers.4.bias
1.layers.6.weight
1.layers.6.bias
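For anyone puzzled by those names: named_parameters builds each name as the dotted path of child-module names down to the parameter, so `1.layers.0.weight` means child `1` of the top-level model (presumably the classifier head here), its `layers` submodule, its child `0`, parameter `weight`. A small sketch reproducing that naming pattern with a made-up structure (not the actual ULMFiT model):

```python
import torch.nn as nn

# Hypothetical hierarchy: child "1" of a Sequential owns a "layers" submodule
head = nn.Module()
head.layers = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
model = nn.Sequential(nn.Embedding(100, 8), head)

for name, p in model.named_parameters():
    print(name)
# 0.weight
# 1.layers.0.weight
# 1.layers.0.bias
# 1.layers.2.weight
# 1.layers.2.bias
```

Seeing only `1.*` names in the trainable list means everything under child `0` (the encoder) was frozen, which matches what freeze() should do.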