Multiple return variables in LBFGS closure

A typical LBFGS closure:

for input, target in dataset:
    def closure():
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        return loss
    optimizer.step(closure)

But what if I need to return multiple variables from closure(), because I need to store and plot the loss and other variables outside of the closure loop? However, I’m not allowed to write the following:

for input, target in dataset:
    def closure():
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        u=loss+1
        return loss, u
    loss, u=closure
    optimizer.step(closure)

Hi Robin!

Define some empty lists:

myLosses = []
myUs = []

and have your closure store the results you want to plot in them for future use:

    def closure():
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        u=loss+1
        myLosses.append (loss)
        myUs.append (u)
        return loss
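
For completeness, here is a minimal, self-contained sketch of that pattern; the model, data and LBFGS settings are just placeholders for illustration:

import torch

# toy model and data -- placeholders for illustration only
model = torch.nn.Linear(4, 1)
loss_fn = torch.nn.MSELoss()
dataset = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(3)]
optimizer = torch.optim.LBFGS(model.parameters(), lr=0.1)

# defined once, outside the loop, so the stored values persist after it
myLosses = []
myUs = []

for input, target in dataset:
    def closure():
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        u = loss + 1
        myLosses.append (loss)
        myUs.append (u)
        return loss
    optimizer.step(closure)

# both lists are available here, after the loop
print (len (myLosses), len (myUs))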

Best.

K. Frank

Hi KFrank, many thanks. But I have to plot a lot of figures based on those variables, so it would be nicer to return them all and access them outside of the loop. Is that impossible?

Hi Robin!

I didn’t make it explicit, but myLosses and myUs are meant to be defined
initially outside of the loop, so the values you store in them will persist
after the loop exits.

As a refinement to what I posted above, you should probably use
myLosses.append (loss.item()) and myUs.append (u.item())
(or similarly use .detach()) so that you don’t keep around references
to the computation graph.
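
For example, the three variants side by side:

myLosses.append (loss)            # keeps the loss tensor (and its autograd graph) alive
myLosses.append (loss.item())     # stores a plain Python float
myLosses.append (loss.detach())   # stores a tensor that is cut off from the graph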

Best.

K. Frank

Hi Frank,
it seems that myLosses and myUs are meant to be defined initially outside of the loop, but loss needs to be defined inside the loop as well. In other words, does loss need to be defined/given a value twice (inside and outside of the loop)? Is there any way to avoid this redundancy?

Hi Robin!

I don’t understand what you are asking.

Here’s how you can do what I think you want to do:

>>> import torch
>>> print (torch.__version__)
2.1.0
>>>
>>> # outside of loop
>>> myLosses = []
>>> myUs = []
>>>
>>> for  i in range (1, 11):
...     loss = i * torch.tensor ([10.0])
...     u = loss + 1
...     # inside of loop
...     myLosses.append (loss.item())
...     myUs.append (u.item())
...
>>> # outside of loop
>>> myLosses
[10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
>>> myUs
[11.0, 21.0, 31.0, 41.0, 51.0, 61.0, 71.0, 81.0, 91.0, 101.0]

Best.

K. Frank

Hi Frank,
thanks so much. However, you used a normal loop and everything worked. But when this loop is an LBFGS closure, the loss (defined inside the closure) isn’t recognized as a defined variable unless it is also defined outside the closure, as far as I’ve found.

Hi Robin!

Please post a super-simplified, minimal, fully-self-contained, runnable script
that reproduces your issue, together with the output you get when you run it.

Best.

K. Frank

You might try using a class, since it carries a self reference through which you can access variables without needing to return them from, or pass them into, the function. This seems to work:

import torch
import torch.nn as nn


class Trainer:
    def __init__(self):
        self.model = nn.Linear(10, 1)
        self.optimizer = torch.optim.SGD(self.model.parameters(), lr=0.1)
        self.loss_fn = nn.L1Loss()

        self.dataset = [(torch.rand(10, 10), torch.rand(10, 1)), (torch.rand(10, 10), torch.rand(10, 1))]

        self.loss_list = []
        self.u_list = []
        self.u = 0

    def train_loop(self):
        for input, target in self.dataset:
            self.input, self.target = input, target
            self.optimizer.step(self.closure)

    def closure(self):
        self.optimizer.zero_grad()
        output = self.model(self.input)
        loss = self.loss_fn(output, self.target)
        loss.backward()
        self.u+=1
        self.loss_list.append(loss.item())
        self.u_list.append(self.u)
        return loss

train = Trainer()
train.train_loop()

print(train.loss_list)
print(train.u_list)
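
The example above uses SGD to keep things short; since the thread is about LBFGS, the optimizer line in __init__ should be swappable for something like the following (the lr and max_iter values here are arbitrary). Keep in mind that LBFGS may evaluate the closure several times per step, so loss_list will then collect more than one entry per call to step.

# in __init__, instead of the SGD line -- lr and max_iter are arbitrary
self.optimizer = torch.optim.LBFGS(self.model.parameters(), lr=0.1, max_iter=20)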

Hi J_Johnson, many thanks for your help! But if

        self.u+=1
        self.loss_list.append(loss.item())
        self.u_list.append(self.u)

are put in train_loop like:

def train_loop(self):
    for input, target in self.dataset:
        self.input, self.target = input, target
        self.optimizer.step(self.closure)
        self.u+=1
        self.loss_list.append(loss.item())
        self.u_list.append(self.u)

loss isn’t defined there! The reason I need to put these lines outside of the closure is that LBFGS evaluates the loss quite a few times in each epoch, but I only want to store the final loss of each epoch.

In your original example, the definition “closure” is NOT being run until optimizer.step(closure). It is only being defined again and again. A definition does not run until it is actually called.
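
A quick illustration of that point:

def closure():
    print("closure is running")
    return 1.0

# nothing has been printed yet -- the body only runs when the function is called
loss = closure()   # now "closure is running" is printed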

At this point, I’m unclear on what your objective is. If that’s not it, can you please clarify what you are trying to do?

Oh sorry, in my original example loss, u = closure is meant to follow optimizer.step(closure) rather than the other way round. I just need to get loss etc. after the closure has been run.

for input, target in dataset:
    def closure():
        optimizer.zero_grad()
        output = model(input)
        loss = loss_fn(output, target)
        loss.backward()
        u=loss+1
        return loss, u
    optimizer.step(closure)
    loss, u=closure

Did you try running the example I provided? It creates a list of the losses that occurred inside the closure at each step. If you want to aggregate those over each epoch, you can add the code to do so between epochs. Or add a method to the class that resets those values:

def reset_stats(self):
    self.u_list = []
    self.u = 0

And just call that where appropriate.
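
For instance, here is a rough sketch that keeps only the final loss from each epoch, using the Trainer class above with the reset_stats method added (the epoch count is arbitrary):

trainer = Trainer()
epoch_losses = []

for epoch in range(5):                            # arbitrary number of epochs
    trainer.reset_stats()                         # clear the per-epoch counters
    trainer.train_loop()
    epoch_losses.append(trainer.loss_list[-1])    # last closure loss of this epoch

print(epoch_losses)

Note also that optimizer.step(closure) itself returns the loss computed by the closure (for LBFGS, from its initial evaluation in that step), so another option is to record that return value directly in train_loop.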

Yes, I did. I suddenly get it now. The essence is that one has to define whatever we want to carry out of the closure as self.* attributes!
