Load and freeze one model and train others

I have a model A that consists of three submodels: model1, model2, and model3.
The model flow is: model1 --> model2 --> model3.
I have already trained model1 in an independent project.
My question is: how do I use the pre-trained model1 when training model A?

Currently, I am trying to implement this as follows:
I load the checkpoint of model1 with model1.load_state_dict(torch.load("model1.pth")) and then set requires_grad to False for all of model1's parameters.
Is that correct?

That would do it as far as loading and freezing model1 is concerned.
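A minimal sketch of the approach discussed above, assuming model1/model2/model3 are simple Linear layers for illustration (the actual submodels, checkpoint path, and optimizer settings are placeholders):

```python
import torch
import torch.nn as nn

# Stand-in submodels; replace with the real model1/model2/model3.
model1 = nn.Linear(8, 8)
model2 = nn.Linear(8, 8)
model3 = nn.Linear(8, 2)

# Load the pre-trained weights for model1 (path is illustrative):
# model1.load_state_dict(torch.load("model1.pth"))

# Freeze model1 so its parameters are never updated.
for p in model1.parameters():
    p.requires_grad = False
model1.eval()  # also fixes dropout / batch-norm statistics

# Compose model A: model1 --> model2 --> model3.
modelA = nn.Sequential(model1, model2, model3)

# Pass only the trainable parameters to the optimizer.
optimizer = torch.optim.SGD(
    (p for p in modelA.parameters() if p.requires_grad), lr=0.1
)

x = torch.randn(4, 8)
loss = modelA(x).sum()
loss.backward()
# After backward(), model1's parameters have no gradients,
# while model2 and model3 receive gradients as usual.
```

Calling model1.eval() is optional but usually desirable when freezing: it keeps layers like dropout and batch norm in inference mode even while the rest of model A trains.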


Got it, thanks a lot.