Instantiate with pre-trained layers vs loading weights

Consider the following PyTorch model:

import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self, layer1: nn.Module, layer2: nn.Module):
        super().__init__()
        # store the (possibly pre-trained) sub-modules
        self.l1 = layer1
        self.l2 = layer2

    def forward(self, x):
        return self.l2(self.l1(x))

With pre-trained layers lyr1 and lyr2, if I instantiate

model = MyModel(layer1=lyr1, layer2=lyr2)

Will the model in this case work as a pre-trained model, preserving the weights of lyr1 and lyr2? If not, how do I construct a pre-trained MyModel instance from the lyr1 and lyr2 objects?

I think it will work. If we do not want to update the weights of the pre-trained layers during fine-tuning, we can set requires_grad to False on the selected layers. Example:

model = MyModel(layer1=lyr1, layer2=lyr2)
for param in model.l1.parameters():
    param.requires_grad = False
for param in model.l2.parameters():
    param.requires_grad = False
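
If other parts of the model remain trainable (for example a new classification head), a common pattern is to hand the optimizer only the parameters that were not frozen, so the pre-trained weights are never updated. A minimal sketch, assuming torch.optim.Adam; the learning rate is an arbitrary illustrative value:

import torch

# Pass only the parameters that still require gradients to the optimizer
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-4,  # arbitrary value for illustration
)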

Is there any possibility of undefined behavior? Would it be wise to call model.l1.load_state_dict(lyr1.state_dict()) and model.l2.load_state_dict(lyr2.state_dict()) just to be safe?

Assuming lyr1 and lyr2 are already properly initialized with the pre-trained weights, your code should work fine.
Loading the state_dicts again afterwards would not be needed if the weights were already loaded into lyr1 and lyr2, since MyModel stores references to those same module objects rather than copies.
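
If you want to double-check, here is a quick sketch (using the MyModel class from the question and hypothetical nn.Linear layers standing in for the pre-trained modules) that confirms the wrapped model shares the exact same parameters as the original layers:

import torch
import torch.nn as nn

# Hypothetical stand-ins for the pre-trained layers
lyr1 = nn.Linear(10, 20)
lyr2 = nn.Linear(20, 5)

model = MyModel(layer1=lyr1, layer2=lyr2)

# The sub-modules are the same objects, so the weights are trivially preserved
assert model.l1 is lyr1 and model.l2 is lyr2
assert all(
    torch.equal(p_model, p_orig)
    for p_model, p_orig in zip(model.l1.parameters(), lyr1.parameters())
)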