Custom regularizer

I want to make sure that a model I am training does not diverge from my main model.
I was thinking that if I could add the difference [parameters of main model] - [parameters of the model] as a regularizer, then I might be able to do this.
However, I am not sure if there is a better way to do it.
Is there any similar implementation in PyTorch?

Would it be possible to calculate a p-norm between the parameters as described here and add it to your loss?
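Something along these lines might work as a rough sketch (names like `main_model` and `reg_weight` are just placeholders for your own objects, not anything from a library):

```python
import torch
import torch.nn as nn

def param_distance_penalty(model: nn.Module, main_model: nn.Module, p: int = 2) -> torch.Tensor:
    """Sum of p-norms of the differences between corresponding parameters.

    `main_model` is treated as a fixed reference: its parameters are detached,
    so gradients only flow into `model`.
    """
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for param, ref_param in zip(model.parameters(), main_model.parameters()):
        penalty = penalty + torch.norm(param - ref_param.detach(), p=p)
    return penalty

# Hypothetical usage inside a training step:
# loss = criterion(model(inputs), targets) \
#        + reg_weight * param_distance_penalty(model, main_model, p=2)
# loss.backward()
```

This keeps the main model fixed and only penalizes the trained model for drifting away from it; `reg_weight` controls how strongly the penalty pulls the parameters back toward the reference.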