Hi,
I have created the following optimizer.
optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=momentum)
Based on a condition, I am updating the weights for the new inputs. Whenever I update the weights, I run through the following code body.
optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=0.9) # doubt here
optimizer.zero_grad()
inputs_new, _ = data_list[max_index]
outputs_new = model(inputs_new)
loss = criterion(outputs_new, torch.tensor([y]))
loss.backward()
optimizer.step()
Is it correct to instantiate the optimizer again and again (the line with the comment)? Is it bad practice? Is it logically wrong?
Thanks in advance.
I don't see anything wrong here; it should work fine.
Are you facing any problems, mate?
Oli (Olof Harrysson), June 21, 2019, 5:55am
I guess the momentum wouldn't work very well. I've never seen anyone do that before; most people don't reinitialise it.
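A minimal sketch of why that matters (the model, shapes, and data below are placeholders, not taken from the question): SGD with momentum keeps per-parameter momentum buffers in optimizer.state, and a freshly constructed optimizer starts with that state empty, so re-creating it on every update throws the accumulated momentum away.
import torch
import torch.nn as nn
import torch.optim as optim

# Toy model and data just to illustrate; any nn.Module behaves the same way.
model = nn.Linear(4, 2)
criterion = nn.CrossEntropyLoss()
x, y = torch.randn(1, 4), torch.tensor([1])

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One update populates the momentum buffers in optimizer.state.
optimizer.zero_grad()
criterion(model(x), y).backward()
optimizer.step()
print(len(optimizer.state))  # > 0: momentum buffers now exist

# A newly constructed optimizer has empty state, so the momentum
# accumulated so far is discarded.
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
print(len(optimizer.state))  # 0: buffers are gone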
Hi,
The accuracy has gone down by nearly 20% in the first case (with the optimizer instantiated only once).
Thanks
Thanks for the input. Actually, the momentum works well in that case.
Accuracy depends on other parameters as well. I don't think re-instantiating the optimizer again and again is good practice, but then there is nothing strictly wrong in doing that.
Can you post the full code and tell us what exactly you are doing?
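For reference, a rough sketch of the more common pattern: create the optimizer once, outside the update loop, so the momentum buffers persist across updates. The model, data_list, and labels below are assumed placeholders standing in for the names in the question.
import torch
import torch.nn as nn
import torch.optim as optim

# Placeholders standing in for the question's model, criterion,
# data_list, max_index, and y.
model = nn.Linear(4, 2)
criterion = nn.CrossEntropyLoss()
data_list = [(torch.randn(1, 4), 0), (torch.randn(1, 4), 1)]

# Instantiate the optimizer once, before the loop, so that
# optimizer.state (the momentum buffers) carries over between steps.
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for max_index, y in [(0, 0), (1, 1)]:
    optimizer.zero_grad()
    inputs_new, _ = data_list[max_index]
    outputs_new = model(inputs_new)
    loss = criterion(outputs_new, torch.tensor([y]))
    loss.backward()
    optimizer.step()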
This should help. I had a similar question recently…
You shouldn't be doing that, right? fmi as well, and see my issue here.