LBFGS closure in TorchScript?

Hello folks,

I have a model which I’m refactoring to support TorchScript for use in a C++ library. An issue I’ve run into is that function definitions aren’t supported inside a scripted method, which rules out the closure that the LBFGS optimiser requires (according to the docs). So I’m wondering: is it possible to use the LBFGS optimiser in TorchScript at all?

import torch

class Test(torch.nn.Module):
    def __init__(self):
        super(Test, self).__init__()      
        
    def forward(self, 
                input_: torch.Tensor, 
                target: torch.Tensor) -> torch.Tensor:
        
        optim = torch.optim.LBFGS(tensor)
        max_iter = 50
        n_iter = 0
        
        def closure():
            optim.zero_grad()
            loss = input_ - target
            loss.backward()
            n_iter += 1
            return loss  
        
        while n_iter <= max_iter:
            optim.step(closure)
        
        return tensor
    
    
test = torch.jit.script(Test())
test.save('test.pt')

The (truncated) output is:

UnsupportedNodeError: function definitions aren't supported:
  File "/tmp/ipykernel_4162/3211224755.py", line 15
        n_iter = 0
        
        def closure():
        ~~~ <--- HERE
            optim.zero_grad()
            loss = input_ - target
'Test.forward' is being compiled since it was called from 'Test.forward'
  File "/tmp/ipykernel_4162/3447201789.py", line 20
                target: torch.Tensor) -> torch.Tensor:
        
        optim = torch.optim.LBFGS(tensor)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
        max_iter = 50
        n_iter = 0
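In case it helps anyone hitting the same error: the `torch.optim` optimisers themselves aren’t scriptable, so one workaround is to script only the loss computation and drive the LBFGS loop from eager Python (or, in C++, from `torch::optim::LBFGS` around the loaded module). A minimal sketch of that split — the `Loss` module, parameter names, and the quadratic loss are my own stand-ins, not from the original post:

```python
import torch

class Loss(torch.nn.Module):
    def forward(self, input_: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Must return a scalar so loss.backward() works inside the closure.
        return torch.sum((input_ - target) ** 2)

# Only the loss computation is scripted; the optimiser stays in eager mode.
loss_fn = torch.jit.script(Loss())

target = torch.ones(3)
param = torch.zeros(3, requires_grad=True)
optim = torch.optim.LBFGS([param], max_iter=50)

def closure():
    optim.zero_grad()
    loss = loss_fn(param, target)
    loss.backward()
    return loss

# LBFGS runs up to max_iter inner iterations per step() call.
optim.step(closure)
```

The same structure carries over to the C++ side: load the scripted loss with `torch::jit::load` and wrap it in a closure passed to the C++ `torch::optim::LBFGS::step`.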