I know I could use `named_parameters()` to do that.
However, when I run a simple test, I hit an error.
import torch
import torch.nn as nn
import torch.optim as optim

if __name__ == '__main__':
    module = nn.Sequential(nn.Linear(2, 3), nn.Linear(3, 2))
    params_dict = dict(module.named_parameters())
    params = []
    for key, value in params_dict.items():
        if key[-4:] == 'bias':
            params += [{'params': value, 'lr': 0.0}]
        else:
            params += [{'params': value, 'lr': 0.1}]
    op = optim.SGD(params, momentum=0.9)
The error message:
Traceback (most recent call last):
  File "test_lr.py", line 15, in <module>
    op = optim.SGD(params, momentum=0.9)
  File "/home/v-yawan1/anaconda2/lib/python2.7/site-packages/torch/optim/sgd.py", line 56, in __init__
    super(SGD, self).__init__(params, defaults)
  File "/home/v-yawan1/anaconda2/lib/python2.7/site-packages/torch/optim/optimizer.py", line 61, in __init__
    raise ValueError("can't optimize a non-leaf Variable")
ValueError: can't optimize a non-leaf Variable
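For what it's worth, the error seems to come from passing a bare tensor as the `'params'` value of a group dict: in that PyTorch version the optimizer iterates over `'params'`, and iterating over a single tensor yields row slices, which are non-leaf. Wrapping each parameter in a list appears to avoid it; a minimal sketch of that workaround:

```python
import torch
import torch.nn as nn
import torch.optim as optim

module = nn.Sequential(nn.Linear(2, 3), nn.Linear(3, 2))

params = []
for key, value in module.named_parameters():
    # Wrap each parameter in a one-element list so the optimizer
    # iterates over the list, not over the tensor's rows.
    if key.endswith('bias'):
        params.append({'params': [value], 'lr': 0.0})
    else:
        params.append({'params': [value], 'lr': 0.1})

op = optim.SGD(params, momentum=0.9)
print([g['lr'] for g in op.param_groups])  # → [0.1, 0.0, 0.1, 0.0]
```

With this, biases keep `lr=0.0` while weights train with `lr=0.1`, and every tensor handed to the optimizer is a leaf.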