Reinitialize the weights and biases

I have an MLP network and I want to use the same MLP with different seed values. I want to know if I can reinitialize the weights and biases to their initial values each time I call the MLP in a loop.

Many of the standard PyTorch modules have a .reset_parameters() method that performs the (re-)initialization of that module (but not of its submodules, so you would need to traverse the module tree, e.g. with .apply(fn) on the root module). Note that this is a convention and not enforced; see the sketch below.
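
Here is a minimal sketch of that approach. The MLP here is a hypothetical stand-in for your model, and the hasattr guard reflects that .reset_parameters() is a convention, not a guarantee:

```python
import torch
import torch.nn as nn

# hypothetical MLP standing in for your model
mlp = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

def reset_weights(m):
    # reset_parameters() exists by convention on modules like
    # nn.Linear and nn.Conv2d, but not necessarily on every module
    if hasattr(m, "reset_parameters"):
        m.reset_parameters()

for seed in range(5):
    torch.manual_seed(seed)   # different seed per run
    mlp.apply(reset_weights)  # .apply traverses all submodules
    # ... train / evaluate mlp here ...
```

Because torch.manual_seed is set before each reset, every iteration of the loop starts from a fresh, seed-dependent initialization of the same architecture.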
Another option is to loop over .named_parameters() and set each parameter inside a torch.no_grad() block. This is particularly useful if you have your own ideas about how to initialize the weights (you should!); see the second sketch below.
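
A minimal sketch of this second approach, again assuming a small hypothetical MLP; the choice of Kaiming initialization for weights and zeros for biases is just an example, swap in your own scheme:

```python
import torch
import torch.nn as nn

mlp = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

torch.manual_seed(42)
with torch.no_grad():  # avoid recording the init ops in autograd
    for name, p in mlp.named_parameters():
        if name.endswith("weight"):
            nn.init.kaiming_uniform_(p)  # or your own init scheme
        elif name.endswith("bias"):
            p.zero_()
```

The in-place nn.init functions (those ending in an underscore) modify the existing parameter tensors, so the optimizer and any references to the parameters stay valid.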

Best regards

Thomas