How can I accumulate the total loss if using a closure for the optimizer?

I was doing total_loss += loss.detach() after optimizer.step(), but now I use a closure.
How can I access the loss now?

I use the total_loss to know when to stop the learning process.

Hi,

I would use a variable from a common parent scope to store it.
Reset it before training, and have each closure add to this variable, for example by making total_loss a global variable in Python 2.7 or declaring it nonlocal in Python 3.
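A minimal sketch of that idea (the model, criterion, and loader here are just placeholders, not from your code):

```python
import torch

model = torch.nn.Linear(10, 1)               # placeholder model
criterion = torch.nn.MSELoss()
optimizer = torch.optim.LBFGS(model.parameters())

def train_epoch(loader):
    total_loss = 0.0                          # reset before training
    for inputs, targets in loader:
        def closure():
            nonlocal total_loss               # variable from the parent scope
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            total_loss += loss.detach()       # accumulate inside the closure
            return loss
        optimizer.step(closure)
    return total_loss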


Dear @albanD, thank you for your reply. :grinning:
Will total_loss still be accurate for optimizers that require a closure?

I don't really understand why a closure is necessary for certain optimizers,
but my current understanding is that the optimizer will run the closure multiple times each step.
Will total_loss still be a valid way to stop training in that case?

Thank you so much

Hi,

That will depend on your problem definition and the optimizer.
In some cases, you may only want to use the first evaluation of the closure by the optimizer.
In other cases, you can simply average over all the evaluations, as they still carry valid information.
That depends on your application.
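For instance, to keep only the first evaluation of each step, something like this sketch would work (reusing the hypothetical model, criterion, and optimizer names from above):

```python
def train_step(inputs, targets):
    first_loss = None
    def closure():
        nonlocal first_loss
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        if first_loss is None:                # record only the first evaluation
            first_loss = loss.detach()
        return loss
    optimizer.step(closure)
    return first_loss                         # add this to your total_loss
```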


Hi, I have looked through the questions about the closure, but I still don't understand why to use one. Do you have some classical and simple examples? For example, "if you want to do xxx, you have to use a closure", or something along those lines. Thank you so much!

Hi,

Optimizers that need to evaluate the function multiple times per step require a closure, for example LBFGS.
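A small sketch of the typical LBFGS pattern (the model and data here are just dummies); the optimizer may call the closure several times within a single step, e.g. during its line search, which is why it needs a callable rather than a precomputed loss:

```python
import torch

model = torch.nn.Linear(2, 1)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.LBFGS(model.parameters(), max_iter=20)

x = torch.randn(8, 2)
y = torch.randn(8, 1)

def closure():
    optimizer.zero_grad()                     # clear gradients on every evaluation
    loss = criterion(model(x), y)             # re-evaluate the loss
    loss.backward()                           # recompute the gradients
    return loss

optimizer.step(closure)                       # LBFGS may call closure() many times here
```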