Will the model in this case work as a pre-trained model, preserving the weights of lyr1 and lyr2? If not, how do I construct a pre-trained MyModel instance from the lyr1 and lyr2 objects?
I think it will work. If you do not want to update the weights of the pretrained layers during finetuning, set requires_grad to False on the parameters of the selected layers. Example:
model = MyModel()
for param in model.l1.parameters():
    param.requires_grad = False
for param in model.l2.parameters():
    param.requires_grad = False
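A common follow-up to freezing is to hand only the still-trainable parameters to the optimizer. A minimal sketch, assuming a hypothetical MyModel with a frozen l1 and a trainable head l2 (the layer shapes here are made up for illustration):

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(4, 8)  # stands in for the pretrained layer, to be frozen
        self.l2 = nn.Linear(8, 2)  # stands in for the new head, stays trainable

    def forward(self, x):
        return self.l2(torch.relu(self.l1(x)))

model = MyModel()
for param in model.l1.parameters():
    param.requires_grad = False

# Pass only trainable parameters to the optimizer; the frozen ones are skipped,
# so they receive no updates even if gradients were somehow computed for them.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
```

Note that torch.optim optimizers accept any iterable of parameters, so the generator expression above works; filtering is optional (frozen parameters get no gradients anyway), but it keeps the optimizer state smaller.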
Is there any possibility of undefined behavior? Would it be wise to call model.l1.load_state_dict(lyr1.state_dict()) and model.l2.load_state_dict(lyr2.state_dict()) just to be safe?
Assuming the lyrx objects are already properly initialized with the pretrained weights, your code should work fine.
Loading the state_dict again afterwards would not be needed if it was already done.
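To see why the extra load_state_dict call is redundant: assigning an existing module to an attribute of an nn.Module registers that same module, so the model's parameters are the very same tensors as the originals, not copies. A minimal sketch with hypothetical layer shapes standing in for lyr1 and lyr2:

```python
import torch
import torch.nn as nn

# Hypothetical "pretrained" layers standing in for lyr1 and lyr2.
lyr1 = nn.Linear(4, 8)
lyr2 = nn.Linear(8, 2)

class MyModel(nn.Module):
    def __init__(self, l1, l2):
        super().__init__()
        # Assigning the existing modules registers them as submodules and
        # reuses their weight tensors directly -- no copy is made.
        self.l1 = l1
        self.l2 = l2

    def forward(self, x):
        return self.l2(torch.relu(self.l1(x)))

model = MyModel(lyr1, lyr2)

# Freeze the pretrained layers for finetuning.
for param in model.parameters():
    param.requires_grad = False
```

Because model.l1.weight is literally lyr1.weight (the identical tensor object), calling model.l1.load_state_dict(lyr1.state_dict()) would just copy the weights onto themselves. It is harmless, but not needed.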