Set initial weights of a complicated model using weights from a .pkl file

I’m wondering: is this a correct way to initialize my model’s weights from a pickled file?
The .pkl file was saved after the last epoch of a previous training run of the same model (unfortunately, the .ckpt file was lost):

model = FaultNetPL(batch_size=5).cuda()
model_w = torch.load('model_weights.pkl')
model.load_state_dict(model_w)

trainer.fit(model)

I tried looking for similar questions, but the answer is always to use a weight_init method, which (as I understand it) iterates over all layers of the model and sets the weights of each one semi-manually.
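
For reference, the weight_init approach from those answers looks roughly like this, as far as I understand it (a minimal sketch; the layer types and init functions are just placeholders, not what I actually want to do):

import torch.nn as nn

def weight_init(m):
    # pick an initialization depending on the layer type
    if isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    elif isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

model.apply(weight_init)  # .apply() calls weight_init on every submodule recursively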

It depends on what model_weights.pkl contains. If it was stored using model.state_dict(), your approach will work. On the other hand, if it contains raw tensors, a custom dict, etc., you would need to load it manually using the same logic that was applied while storing these values.
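
A quick way to check is to load the file and inspect the object before trying to restore anything (a minimal sketch; the inspection prints are just illustrative):

import torch

obj = torch.load('model_weights.pkl', map_location='cpu')
print(type(obj))  # e.g. collections.OrderedDict for a plain state_dict

if isinstance(obj, dict):
    # a state_dict maps parameter names to tensors
    for name, value in list(obj.items())[:5]:
        print(name, getattr(value, 'shape', type(value)))

# if it turns out to be a plain state_dict, loading directly works:
# model.load_state_dict(obj)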

Thanks for your reply, @ptrblck. Is it OK that my model’s loss in the first epoch increased a little compared to the loss I had at the end of the previous training run (the one model_weights.pkl came from)?
My intuition behind this behavior is that I didn’t restore the optimizer and dataloader parameters …

Yes, both of your suggestions might be valid if you are continuing the training.
Assuming you are checking the training performance of the model, the order of the samples would matter, since each update depends on them.
Also, if you are not restoring the optimizer, the first update step might increase the loss, especially if the optimizer used internal states (e.g. the running estimates in Adam).
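
To avoid this when resuming in the future, one common pattern is to checkpoint the model and optimizer together (a minimal sketch; the file name and dict keys are just conventions, and model / optimizer / epoch are assumed to exist in your training script):

import torch

# at the end of each epoch, save everything needed to resume
torch.save({
    'epoch': epoch,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
}, 'checkpoint.pt')

# before continuing training, restore both states
ckpt = torch.load('checkpoint.pt')
model.load_state_dict(ckpt['model_state_dict'])
optimizer.load_state_dict(ckpt['optimizer_state_dict'])
start_epoch = ckpt['epoch'] + 1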

However, the validation set should yield the same results up to the limits of floating point precision.
Make sure you are calling model.eval() before calculating the validation performance.
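
Something like this for the validation loop (a minimal sketch; val_loader and criterion are assumed to exist in your script):

import torch

model.eval()  # dropout disabled, batchnorm uses running statistics
val_loss = 0.0
with torch.no_grad():  # gradients are not needed for evaluation
    for data, target in val_loader:
        output = model(data)
        val_loss += criterion(output, target).item()
val_loss /= len(val_loader)
model.train()  # switch back before the next training epoch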