Can the optimizer of one model be used in another model?

I know that when we create an optimizer (optimizer-1), we pass the parameters of a model (model-1) to it, and optimizer-1 optimizes those parameters.

After the parameters of model-1 have been optimized, suppose we create a model-2 and an optimizer-2 by passing the parameters of model-2 to optimizer-2, and then copy the state_dict of optimizer-1 and load it into optimizer-2.

Will optimizer-2 then have the same learning rate and other optimization parameters as optimizer-1? Would optimizer-2 "continue" optimization from the point where optimizer-1 stopped?

Yes, that is the expected behaviour.
You are probably looking for this: torch.optim.Optimizer.load_state_dict — PyTorch 1.13 documentation
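A minimal sketch of that workflow, assuming the two models share the same architecture (so the parameter shapes line up); the layer sizes and optimizer choice here are just illustrative:

```python
import torch
import torch.nn as nn

# Two models with identical architectures; transferring optimizer state
# only makes sense when the parameter shapes match one-to-one.
model_1 = nn.Linear(4, 2)
model_2 = nn.Linear(4, 2)

opt_1 = torch.optim.Adam(model_1.parameters(), lr=1e-3)

# Take a few steps so opt_1 accumulates internal state
# (Adam's running moment estimates and step count).
for _ in range(3):
    loss = model_1(torch.randn(8, 4)).pow(2).mean()
    opt_1.zero_grad()
    loss.backward()
    opt_1.step()

# Create opt_2 over model_2's parameters, then load opt_1's state into it.
opt_2 = torch.optim.Adam(model_2.parameters(), lr=1e-3)
opt_2.load_state_dict(opt_1.state_dict())

# opt_2 now has the same hyperparameters (learning rate, betas, etc.) and
# the same per-parameter state, so it continues from where opt_1 stopped,
# applied to model_2's weights.
print(opt_2.param_groups[0]["lr"])
print(int(opt_2.state[model_2.weight]["step"]))
```

Note that the per-parameter state is matched up positionally, in the order the parameters were passed to each optimizer, which is why the architectures need to agree.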

Thank you for the confirmation!