TypeError: unsupported operand type(s) for *: 'float' and 'dict'

This is how my feval function is set up:

num_calls = [0]
while num_calls[0] <= params.num_iterations:
    def feval():
        num_calls[0] += 1       # count calls so the loop terminates
        optimizer.zero_grad()   # clear gradients before the new pass
        # Forward pass (compute loss)
        # Backward pass (loss.backward())
        return loss
    optimizer.step(feval)

The loss variable contains:

Variable containing:
 913.8555
[torch.FloatTensor of size 1]

torch.Size([1])

The error message with L-BFGS:

Traceback (most recent call last):
  File "st.py", line 208, in <module>
    optimizer.step(feval)
  File "/usr/local/lib/python3.5/dist-packages/torch/optim/lbfgs.py", line 192, in step
    t = min(1., 1. / abs_grad_sum) * lr
TypeError: unsupported operand type(s) for *: 'float' and 'dict'

Using Adam results in:

Traceback (most recent call last):
  File "st.py", line 208, in <module>
    optimizer.step(feval)
  File "/usr/local/lib/python3.5/dist-packages/torch/optim/adam.py", line 76, in step
    step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
TypeError: unsupported operand type(s) for *: 'dict' and 'float'

What’s causing this issue and how do I resolve it?

The issue was with the values that I was providing to both Adam and L-BFGS:

optim_state = None
if params.optimizer == 'lbfgs':
  optim_state = {
    "max_iter": params.num_iterations,
    "tolerance_change": -1,
    "tolerance_grad": -1,
  }
  optimizer = optim.LBFGS([img], optim_state)
elif params.optimizer == 'adam':
  optim_state = {
    "lr": 1,
  }
  optimizer = optim.Adam([img], optim_state)

Though I'm not sure how PyTorch's optimizers expect these inputs to be set up?

Hi,

You can find that in the docs for Adam, for example; LBFGS is just below it on the same page.
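
For reference, the second positional parameter of both optim.Adam and optim.LBFGS is lr, so passing optim_state positionally binds the whole dict to lr, which is exactly the value that later gets multiplied by a float inside step() in both tracebacks above. One way to keep the dict is to unpack it into keyword arguments. A minimal sketch with current PyTorch, where img and num_iterations are hypothetical stand-ins for the values used above:

import torch
import torch.optim as optim

# Hypothetical stand-ins for the tensor and settings from the snippets above.
img = torch.randn(1, 3, 64, 64, requires_grad=True)
num_iterations = 1000

# L-BFGS: **optim_state unpacks each dict entry into a keyword argument,
# instead of binding the whole dict to the positional lr parameter.
optim_state = {
    "max_iter": num_iterations,
    "tolerance_change": -1,
    "tolerance_grad": -1,
}
optimizer = optim.LBFGS([img], **optim_state)

# Adam: same idea; here lr really is the value being set.
optim_state = {"lr": 1}
optimizer = optim.Adam([img], **optim_state)

With the keyword form, the dicts in the if/elif block above can stay exactly as they are; only the two constructor calls need the ** added.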