How do I correctly use model weights outside of `forward` in a distributed training setup with DDP?

I am using DistributedDataParallel (DDP) in a distributed training script where I need to use some of my model's weights outside of the `forward` method. What is the best practice for doing this so that I avoid marking a parameter ready twice (the "Expected to mark a variable ready only once" error)?
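For reference, here is a minimal sketch of the kind of pattern I mean. The single-process `gloo` group and the toy `Net` module are illustrative placeholders, not my actual code; in this sketch the extra use of `self.fc.weight` happens inside `forward`, which keeps the reuse visible to DDP's reducer:

```python
import os
import torch
import torch.nn as nn
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process "distributed" setup with the gloo backend,
# just so the snippet runs standalone (illustrative only).
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 4)

    def forward(self, x):
        main = self.fc(x)
        # The same weight tensor is reused here for an auxiliary output.
        # Because the reuse stays inside forward, the parameter is still
        # marked ready exactly once during the single backward pass.
        aux = x @ self.fc.weight.t()
        return main, aux

model = DDP(Net())
x = torch.randn(2, 4)
main, aux = model(x)
loss = main.sum() + aux.sum()
loss.backward()  # gradients for fc.weight accumulate from both uses

dist.destroy_process_group()
```

The question is what the recommended approach is when the extra weight usage cannot easily be moved into `forward` like this.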