LBFGS optimizer can't handle multiple return values in the closure

I found an issue when using the LBFGS optimizer.
I need to return some tensors computed inside the closure along with the loss value. Adam and SGD work fine, but LBFGS can't handle the extra tensors and raises:

TypeError: float() argument must be a string or a number, not 'tuple'

Toy example:

import torch
X = torch.tensor([1, 2, 3, 4], dtype=torch.float32)
Y = torch.tensor([2, 4, 6, 8], dtype=torch.float32)
w = torch.tensor(0.0, dtype=torch.float32, requires_grad=True)

# model prediction
def forward(x):
    return w * x

print(f'Prediction before training: f(5) = {forward(5):.3f}')

# training
lr = 0.1
nit = 100

loss = torch.nn.MSELoss()
# optimizer = torch.optim.SGD([w], lr=lr)
# optimizer = torch.optim.Adam([w], lr=lr)
optimizer = torch.optim.LBFGS([w], lr=lr)

def closure():
    optimizer.zero_grad()
    y_pred = forward(X)
    myloss = loss(y_pred, Y)  # MSELoss expects (input, target)
    myloss.backward()
    # print('y_pred is:', y_pred)
    return myloss, y_pred

for epoch in range(nit):

    l, pred = optimizer.step(closure)

    if epoch % 10 == 0:
        print(f'epoch {epoch+1}: w = {w:.3f}, loss = {l:.8f}')

print(f'Prediction after training: f(5) = {forward(5):.3f}')
print('ypred:',pred)

Example adapted from:
https://github.com/python-engineer/pytorchTutorial/blob/master/06_1_loss_and_optimizer.py

See LBFGS.step. It contains the following lines:

orig_loss = closure()
loss = float(orig_loss)

The float() call expects the output of the closure to be a single number (or a scalar tensor), so returning a tuple fails. The other optimizers implement their step functions differently and do not call float() on the closure's return value.
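A possible workaround, reusing the setup from the toy example above: return only the loss from the closure and hand the extra tensors out through a variable in the enclosing scope. This is just a sketch, and the extras dict is a hypothetical name of my own, not a PyTorch API:

extras = {}  # hypothetical holder for tensors computed inside the closure

def closure():
    optimizer.zero_grad()
    y_pred = forward(X)
    myloss = loss(y_pred, Y)
    myloss.backward()
    extras['y_pred'] = y_pred.detach()  # stash the prediction instead of returning it
    return myloss  # a scalar tensor, so float(orig_loss) works

for epoch in range(nit):
    l = optimizer.step(closure)
    pred = extras['y_pred']
    if epoch % 10 == 0:
        print(f'epoch {epoch+1}: w = {w:.3f}, loss = {l:.8f}')

Note that LBFGS may evaluate the closure multiple times per step, so extras ends up holding the tensors from the most recent evaluation.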