Is there a standard way of accessing the loss from an optimizer class?

I am implementing improved Rprop with weight backtracking, and I would like to follow the standard PyTorch way of doing so. I see that a closure function can be used inside the step function of an optimizer class to get access to the loss. However, I have no need to recompute the gradients; I just want access to the loss value. Is there any way to do that other than passing the loss as an argument to the step function?

Hey, @p-enel

As far as I can see from the source code, torch.optim.Optimizer has nothing to do with the loss; it only updates the parameters based on their gradients. So, when the user calls loss.backward() in a loop like

for input, target in dataset:
    optimizer.zero_grad()           # clear gradients from the previous step
    output = model(input)
    loss = loss_fn(output, target)
    loss.backward()                 # populates .grad on the parameters
    optimizer.step()                # uses only the .grad attributes

the parameters tracked by the optimizer already have everything needed for step() in their .grad attributes.

Regarding the closure: LBFGS re-evaluates the loss multiple times during a single step, while SGD doesn't expect the loss at all.
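
For context, here is a minimal, self-contained sketch of the closure pattern as LBFGS uses it (the model, loss function, and data below are just placeholders):

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.LBFGS(model.parameters())
loss_fn = torch.nn.MSELoss()

input = torch.randn(4, 10)
target = torch.randn(4, 1)

def closure():
    # LBFGS may call this several times per step to re-evaluate the loss
    optimizer.zero_grad()
    output = model(input)
    loss = loss_fn(output, target)
    loss.backward()
    return loss

optimizer.step(closure)
```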

In short, if you need to work with the exact loss value, I'd pass it in as a closure like it's done in LBFGS; please refer to its source code for the details.
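
If it helps, here is a rough sketch of how a custom optimizer could read the loss from a closure inside step(). The class name `RpropWB` and everything apart from the closure handling are placeholders, not an actual implementation:

```python
import torch
from torch.optim import Optimizer

class RpropWB(Optimizer):  # hypothetical name, for illustration only
    def __init__(self, params, lr=0.01):
        super().__init__(params, dict(lr=lr))

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        # ... Rprop update with weight backtracking would go here,
        #     e.g. comparing `loss` against a value kept in self.state ...
        return loss

# If the gradients are already computed, the closure can simply return the
# existing loss tensor, so no extra backward pass is needed:
#   loss = loss_fn(model(input), target)
#   loss.backward()
#   optimizer.step(lambda: loss)
```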