Missing keys & unexpected keys in state_dict when loading self-trained model

Can you explain more clearly how to wrap the current model in `nn.DataParallel`? Could you give an example?
I get the error `Missing key(s) in state_dict:`, and when I save the model I just use `torch.save()`.
I am new to PyTorch, thanks so much!
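For reference, here is a minimal sketch of the pattern I think causes this (the tiny `nn.Linear` model and the in-memory buffer are just for illustration, not my actual code). Wrapping a model in `nn.DataParallel` prefixes every `state_dict` key with `module.`, so loading that checkpoint into a plain, unwrapped model raises the missing/unexpected key errors; from what I've read, stripping the prefix before `load_state_dict()` seems to work:

```python
import io

import torch
import torch.nn as nn

# a tiny model just for illustration; substitute your real model here
model = nn.Linear(4, 2)

# wrapping in DataParallel registers the original model as a submodule
# named "module", so every state_dict key gains a "module." prefix
parallel_model = nn.DataParallel(model)

# save the wrapped model's state_dict (to an in-memory buffer here,
# but torch.save(..., "checkpoint.pth") behaves the same)
buffer = io.BytesIO()
torch.save(parallel_model.state_dict(), buffer)
buffer.seek(0)

state_dict = torch.load(buffer)
# keys are now "module.weight" and "module.bias" instead of
# "weight" and "bias", which is what triggers the error when
# loading into a plain (unwrapped) nn.Linear

# workaround: strip the "module." prefix from every key
stripped = {k.removeprefix("module."): v for k, v in state_dict.items()}

new_model = nn.Linear(4, 2)  # a fresh, unwrapped model
new_model.load_state_dict(stripped)  # loads cleanly
```

Alternatively, saving `parallel_model.module.state_dict()` in the first place would avoid the prefix entirely; is that the recommended approach?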