This is how my `feval` function is set up:
```python
num_calls = [0]
while num_calls[0] <= params.num_iterations:
    def feval():
        # Forward pass
        # Backward pass
        # (num_calls[0] is incremented inside the closure)
        return loss
    optimizer.step(feval)
```
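For context, here is a self-contained toy version of the same pattern, just to show the closure contract I believe `optimizer.step` expects. The tensor, loss, and iteration count below are made up for illustration, not my actual st.py code:

```python
import torch
from torch import optim

x = torch.nn.Parameter(torch.randn(3))  # stand-in for the real input tensor
optimizer = optim.LBFGS([x], lr=1.0)    # lr is expected to be a plain float
num_iterations = 10                     # stand-in for params.num_iterations

num_calls = [0]

def feval():
    num_calls[0] += 1
    optimizer.zero_grad()
    loss = (x ** 2).sum()  # toy forward pass
    loss.backward()        # backward pass
    return loss

while num_calls[0] <= num_iterations:
    optimizer.step(feval)
```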
The `loss` variable contains:
```
Variable containing:
 913.8555
[torch.FloatTensor of size 1]
```
and `loss.size()` reports `torch.Size([1])`.
The error message with L-BFGS:
```
Traceback (most recent call last):
  File "st.py", line 208, in <module>
    optimizer.step(feval)
  File "/usr/local/lib/python3.5/dist-packages/torch/optim/lbfgs.py", line 192, in step
    t = min(1., 1. / abs_grad_sum) * lr
TypeError: unsupported operand type(s) for *: 'float' and 'dict'
```
Using Adam results in:
```
Traceback (most recent call last):
  File "st.py", line 208, in <module>
    optimizer.step(feval)
  File "/usr/local/lib/python3.5/dist-packages/torch/optim/adam.py", line 76, in step
    step_size = group['lr'] * math.sqrt(bias_correction2) / bias_correction1
TypeError: unsupported operand type(s) for *: 'dict' and 'float'
```
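For what it's worth, both tracebacks suggest that `group['lr']` ended up holding a dict rather than a float. I can reproduce the same kind of TypeError in a toy script by passing an options dict positionally where the `lr` float belongs (hypothetical values, not my real code; recent PyTorch versions reject the dict already in the constructor instead of inside `step`):

```python
import torch
from torch import optim

x = torch.nn.Parameter(torch.randn(3))

# The second positional argument of Adam is lr, so this puts a dict
# into every param group's 'lr' entry:
options = {'learning_rate': 1.0, 'max_iter': 100}
optimizer = optim.Adam([x], options)

def feval():
    optimizer.zero_grad()
    loss = (x ** 2).sum()
    loss.backward()
    return loss

optimizer.step(feval)
# TypeError: unsupported operand type(s) for *: 'dict' and 'float'
```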
What’s causing this issue and how do I resolve it?