When does PyTorch initialize parameters?

Hi, I’m writing my own network in PyTorch, and I want to use a pretrained model inside it. Here is my overridden __init__() code:

import torch
import torch.nn as nn
# H_model and UNet are defined elsewhere in my project.

class Generator(nn.Module):
    def __init__(self) -> None:
        super(Generator, self).__init__()
        model_path = "somedir"
        checkpoint = torch.load(model_path)
        h_model = H_model()
        h_model.load_state_dict(checkpoint['model'])
        # set to evaluation mode
        h_model.eval()
        self.H_model = h_model
        self.unet = UNet(enc_chs=(9, 64, 128, 256, 512),
                         dec_chs=(512, 256, 128, 64),
                         num_class=3, retain_dim=False, out_sz=(304, 304))

Here, h_model is loaded from a checkpoint that I have already trained well. My question is: after the initialization, will the parameters in h_model have changed? And why? (I mean, how does PyTorch treat self-defined layers when it initializes parameters, and when does PyTorch initialize parameters?)

nn.Module subclasses initialize their parameters in __init__. For many modules in PyTorch itself, this is done by calling a method named reset_parameters. Since your load_state_dict call runs after the submodule has been constructed (and thus initialized), it overwrites that random initialization, so your code snippet should train starting from the checkpoint.
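For illustration, here is a minimal sketch of that ordering using a built-in module (the layer sizes are arbitrary):

import torch
import torch.nn as nn

# nn.Linear randomly initializes its weight and bias in __init__
# (via reset_parameters), so two fresh instances differ.
layer_a = nn.Linear(4, 2)
layer_b = nn.Linear(4, 2)
assert not torch.equal(layer_a.weight, layer_b.weight)

# load_state_dict runs after construction and simply overwrites
# the random initialization with the saved values.
layer_b.load_state_dict(layer_a.state_dict())
assert torch.equal(layer_a.weight, layer_b.weight)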
Note that the default initialization of many standard modules is generally not considered state of the art. Applications therefore very often override the initialization, either in the higher-level __init__ or somewhere else in the code between instantiation and the start of training.
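One common pattern for that, as a sketch (the particular init scheme here is just an example, not a recommendation):

import torch.nn as nn

def init_weights(m):
    # Override the default initialization for the layer types you care about.
    if isinstance(m, (nn.Linear, nn.Conv2d)):
        nn.init.kaiming_normal_(m.weight, nonlinearity='relu')
        if m.bias is not None:
            nn.init.zeros_(m.bias)

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 1))
# apply() visits every submodule; called after instantiation, before training.
model.apply(init_weights)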

Best regards

Thomas