TypeError: optimizer can only optimize Tensors, but one of the params is list

I needed to optimize my own loss using the optimizer, but I ran into this problem:

Traceback (most recent call last):
  File "train.py", line 138, in <module>
    fire.Fire()
  File "D:\Anaconda3\lib\site-packages\fire\core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "D:\Anaconda3\lib\site-packages\fire\core.py", line 466, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "D:\Anaconda3\lib\site-packages\fire\core.py", line 681, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "train.py", line 73, in train
    trainer = FasterRCNNTrainer(faster_rcnn).cuda()
  File "D:\python\monoSAR_rcnn\trainer.py", line 63, in __init__
    self.optimizer = self.faster_rcnn.get_optimizer()
  File "D:\python\monoSAR_rcnn\model\faster_rcnn.py", line 307, in get_optimizer
    self.optimizer = t.optim.SGD([params, {'params': self.awl.parameters(), 'weight_decay': 0}], momentum=0.9)
  File "D:\Anaconda3\lib\site-packages\torch\optim\sgd.py", line 68, in __init__
    super(SGD, self).__init__(params, defaults)
  File "D:\Anaconda3\lib\site-packages\torch\optim\optimizer.py", line 52, in __init__
    self.add_param_group(param_group)
  File "D:\Anaconda3\lib\site-packages\torch\optim\optimizer.py", line 230, in add_param_group
    raise TypeError("optimizer can only optimize Tensors, "
TypeError: optimizer can only optimize Tensors, but one of the params is list

This is my get_optimizer:

    def get_optimizer(self):
        """
        return optimizer, It could be overwriten if you want to specify 
        special optimizer
        """
        lr = opt.lr
        params = []
        for key, value in dict(self.named_parameters()).items():
            if value.requires_grad:
                if 'bias' in key:
                    params += [{'params': [value], 'lr': lr * 2, 'weight_decay': 0}]
                else:
                    params += [{'params': [value], 'lr': lr, 'weight_decay': opt.weight_decay}]
        print(type(params))
        if opt.use_adam:
            self.optimizer = t.optim.Adam([params, {'params': self.awl.parameters(), 'weight_decay': 0}])
        else:
            self.optimizer = t.optim.SGD([params, {'params': self.awl.parameters(), 'weight_decay': 0}], momentum=0.9)
        return self.optimizer

This is my loss function (self.awl = AutomaticWeightedLoss(2)):

    import torch
    import torch.nn as nn

    class AutomaticWeightedLoss(nn.Module):
        """Automatically weighted multi-task loss.
        Params:
            num: int, the number of losses
            x: the multi-task losses
        Examples:
            loss1 = 1
            loss2 = 2
            awl = AutomaticWeightedLoss(2)
            loss_sum = awl(loss1, loss2)
        """
        def __init__(self, num=2):
            super(AutomaticWeightedLoss, self).__init__()
            params = torch.ones(num, requires_grad=True)
            self.params = torch.nn.Parameter(params)

        def forward(self, *x):
            loss_sum = 0
            for i, loss in enumerate(x):
                loss_sum += 0.5 / (self.params[i] ** 2) * loss + torch.log(1 + self.params[i] ** 2)
            return loss_sum

How can I solve this problem?

Hi Tianle!

Your problem is that you are passing your optimizer a list that consists
of a list and a dict. Quoting from the torch.optim documentation:

Per-parameter options

Optimizer s also support specifying per-parameter options. To do this, instead of passing an iterable of Variable s, pass in an iterable of dict s. Each of them will define a separate parameter group, and should contain a params key, containing a list of parameters belonging to it.

(Note, this documentation should be updated to reflect that Variable
is deprecated; the term Parameter should be used instead.)

That is, you can pass in a list of Parameters or a list of dicts
(that contain lists of Parameters), but you can’t pass in a list
of lists.
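
To make that concrete, here is a minimal sketch of the two accepted forms and
the rejected one (model and extra are throwaway nn.Linear modules standing in
for your network and your awl, not your actual code):

    import torch
    import torch.nn as nn

    model = nn.Linear(4, 2)
    extra = nn.Linear(2, 1)

    # Accepted: an iterable of Parameters
    opt_a = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

    # Accepted: an iterable of dicts, one dict per parameter group
    opt_b = torch.optim.SGD(
        [{'params': list(model.parameters()), 'lr': 0.01},
         {'params': list(extra.parameters()), 'lr': 0.02, 'weight_decay': 0}],
        momentum=0.9)

    # Rejected: a list whose first element is itself a list of dicts --
    # which is exactly what [params, {'params': ..., 'weight_decay': 0}] builds:
    # torch.optim.SGD([[{'params': list(model.parameters()), 'lr': 0.01}],
    #                  {'params': list(extra.parameters()), 'lr': 0.02}], momentum=0.9)
    # -> TypeError: optimizer can only optimize Tensors, but one of the params is list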

I think this would work for you:

        print(type(params))   # at this point params is a list of dicts
        if opt.use_adam:
            params += [{'params': self.awl.parameters(), 'weight_decay': 0}]
            self.optimizer = t.optim.Adam(params)   # still a list of dicts

Best.

K. Frank

Hi KFrank!
Thank you very much for your reply. I tried what you said:

        if opt.use_adam:
            params += [{'params': self.awl.parameters(), 'weight_decay': 0}]
            self.optimizer = t.optim.Adam(params)
        else:
            params += [{'params': self.awl.parameters(), 'weight_decay': 0}]
            self.optimizer = t.optim.SGD(params, momentum=0.9)

But this error happened:

Traceback (most recent call last):
  File "train.py", line 138, in <module>
    fire.Fire()
  File "D:\Anaconda3\lib\site-packages\fire\core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "D:\Anaconda3\lib\site-packages\fire\core.py", line 466, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "D:\Anaconda3\lib\site-packages\fire\core.py", line 681, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "train.py", line 73, in train
    trainer = FasterRCNNTrainer(faster_rcnn).cuda()
  File "D:\python\monoSAR_rcnn\trainer.py", line 63, in __init__
    self.optimizer = self.faster_rcnn.get_optimizer()
  File "D:\python\monoSAR_rcnn\model\faster_rcnn.py", line 310, in get_optimizer
    self.optimizer = t.optim.SGD(params, momentum=0.9)
  File "D:\Anaconda3\lib\site-packages\torch\optim\sgd.py", line 68, in __init__
    super(SGD, self).__init__(params, defaults)
  File "D:\Anaconda3\lib\site-packages\torch\optim\optimizer.py", line 52, in __init__
    self.add_param_group(param_group)
  File "D:\Anaconda3\lib\site-packages\torch\optim\optimizer.py", line 237, in add_param_group
    raise ValueError("parameter group didn't specify a value of required optimization parameter " +
ValueError: parameter group didn't specify a value of required optimization parameter lr

Then I tried to print params; here is what was printed for the last part of the list:

    {'params': [Parameter containing:
    tensor([1., 1.], requires_grad=True)], 'lr': 0.001, 'weight_decay': 0.0005},
    (this is loss.params) {'params': <generator object Module.parameters at 0x00000226C29AEC10>, 'weight_decay': 0}]

So I wonder if the problem is with loss.params?

Best.

tianle

Hi Tianle!

Do what the error message says: Include a value for lr when you
append self.awl.parameters() to your params list of parameter
groups (analogously to how you initially build params from
self.named_parameters()).
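
For example, mirroring the entries you already build in that loop (just a
sketch, reusing the lr local you already have in get_optimizer):

        params += [{'params': self.awl.parameters(), 'lr': lr, 'weight_decay': 0}]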

Best.

K. Frank

Hi KFrank!
Thank you very much for your reply. I tried what you said:

    params += [{'params': self.awl.parameters(), 'lr': lr, 'weight_decay': 0}]

But this error happened:

ValueError: some parameters appear in more than one parameter group

Then I tried to print params; here is what was printed for the last part of the list:

    {'params': [Parameter containing:
    tensor([1., 1.], requires_grad=True)], 'lr': 0.001, 'weight_decay': 0.0005},
    {'params': <generator object Module.parameters at 0x000001F771AEFBA0>, 'lr': 0.001, 'weight_decay': 0}]

That means I get an error if I add it, and an error if I don’t :joy:
But if I don’t add loss.params, the code runs fine…

Best.

tianle

Hi Tianle!

What do you think this error message might be telling you?

If you were debugging somebody else’s code, what issues might
this error message suggest you take a look at?

This entry in your params list is a dictionary that contains a Parameter.
Does this make sense?

But this entry in your params list is a dictionary that contains
something called a “generator object.” Is this okay? Is it legitimate
that your params list contains two different kinds of things? (And
what is a “generator object?”)

What could you do to explore further what might be going on here?
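
As one way to explore, here is a small, self-contained sketch (Toy is a
made-up module that just stands in for a model that has awl assigned as an
attribute). It first materializes a parameters() generator into a list, and
then checks whether those same Parameter objects also show up in the parent
module’s named_parameters():

    import torch.nn as nn

    class Toy(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(3, 1)
            self.awl = nn.Linear(2, 1)   # stands in for your AutomaticWeightedLoss

    m = Toy()
    gen = m.awl.parameters()
    print(gen)                # prints <generator object Module.parameters at 0x...>
    awl_params = list(gen)    # materializing the generator shows the actual Parameters
    print(awl_params)
    print([name for name, p in m.named_parameters()
           if any(p is q for q in awl_params)])

If that last list is not empty for your model, you can probably see why the
optimizer complains.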

Best.

K. Frank