I was testing a basic neural network with one hidden layer. I tried several loss functions within the same session, and the net got stuck in what I assumed was a local minimum, but the results were completely wrong. After restarting the session, 1 epoch gave better results than 1000 had before. I am using Adam. My guess is that switching the loss function mid-training caused this. Has anyone run into this problem before, and how did you deal with it? Also, what command can I use to reset the weights within a session?
Did you change the loss function during training?
Sometimes one run simply turns out better than another; it depends highly on your use case (e.g. dataset, model, etc.).
Setting the random seed at the beginning of your script helps in dealing with these issues, since it makes runs reproducible.
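For example, seeding PyTorch's RNG makes the weight initialization (and anything else that draws random numbers) repeatable across runs. A minimal sketch (the seed value is arbitrary):

```python
import torch

# Fix the RNG seed so random draws are reproducible.
torch.manual_seed(42)
a = torch.randn(3)

# Re-seeding with the same value reproduces the same draw.
torch.manual_seed(42)
b = torch.randn(3)

print(torch.equal(a, b))  # the two draws match
```

Note that full reproducibility may also require seeding NumPy, Python's `random` module, and setting CUDA determinism flags, depending on your setup.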
You can reset the weights by simply recreating the model, or by re-initializing it with an init function:

```python
def weights_init(m):
    if isinstance(m, nn.Conv2d):
        # Use the in-place xavier_uniform_; the non-underscore version is deprecated.
        torch.nn.init.xavier_uniform_(m.weight)
        # Xavier init needs a tensor with at least 2 dims, so it cannot be
        # applied to the 1-D bias; zero it instead.
        torch.nn.init.zeros_(m.bias)

model.apply(weights_init)
```
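One caveat when resetting weights mid-session with Adam: the optimizer keeps running first/second moment estimates per parameter, so you may also want to recreate the optimizer so it doesn't apply stale statistics to the fresh weights. A minimal sketch (the model architecture and learning rate here are placeholders):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# ... training happens here ...

# Recreating the model gives freshly initialized weights, and a new
# optimizer instance discards Adam's accumulated moment buffers.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# A fresh Adam optimizer starts with an empty state dict.
print(len(optimizer.state))
```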
Looking back, I do think I changed the loss function during training. This is super helpful, thank you!!