Different learning rates for different parameters

Hello! How can I specify a different learning rate for each parameter of my model? In my case, I have something of the form:

x_index = torch.tensor([14.,15.,8.,9.,10.,11.,13.,12.,5.,4.,3.,2.,0.,1.])
wavenumber_vals = torch.tensor([284829.132,284829.132,284840.730,284840.730,284854.732,284854.132,284864.420,284864.920,284884.170,284884.620,284898.434,284898.814,284910.218,284910.218],dtype=torch.float64)

params = torch.tensor([B_0, D_0, gamma_0, b1_0, c1_0, b2_0, eqQ_0], dtype=torch.float64, requires_grad=True)

opt = optim.Adam([params], lr=1e-3)
for i in range(10):
    opt.zero_grad()
    loss = my_loss(func(x_index, params), wavenumber_vals)
    loss.backward()
    opt.step()

I would like to have a different learning rate for each of the 7 parameters of the model. Thank you!


I tried to do it manually:

learn_rate = torch.tensor([1e-5, 1e-10, 1e-3, 1e-3, 1e-3, 1e-3, 1e-3], dtype=torch.float64)
for i in range(5):
    loss = my_loss(func(x_index, params), wavenumber_vals)
    loss.backward()
    params = params - learn_rate*params.grad

but after the first pass in the for loop I am getting this error:

UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/build/aten/src/ATen/core/TensorBody.h:494.)
  params = params - learn_rate*params.grad
Traceback (most recent call last):
TypeError: unsupported operand type(s) for *: 'Tensor' and 'NoneType'

which seems to be related to the fact that after the first pass through the loop, params is no longer a leaf tensor (the reassignment params = params - ... replaces the leaf with the result of an operation). How can I fix that?

You could pass each parameter as its own parameter group and specify the learning rate per group, as described in the per-parameter options section of the torch.optim documentation.
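In code, that could look like the sketch below. Here func, my_loss, the initial values, and the shapes of x_index and wavenumber_vals are toy stand-ins for the quantities in your question; the point is the structure of the optimizer, where each parameter gets its own group with its own lr:

```python
import torch
import torch.optim as optim

# --- toy stand-ins for the model and loss in the question ---
def func(x_index, params):
    return params.sum() * x_index          # placeholder model

def my_loss(pred, target):
    return ((pred - target) ** 2).mean()   # placeholder loss

x_index = torch.tensor([14., 15., 8.], dtype=torch.float64)
wavenumber_vals = torch.tensor([1.0, 2.0, 3.0], dtype=torch.float64)

# hypothetical initial values for B_0, D_0, gamma_0, b1_0, c1_0, b2_0, eqQ_0
init_vals = [1.0, 1e-6, 0.01, 0.1, 0.1, 0.1, 0.5]

# one leaf tensor per parameter, so each can sit in its own param group
param_list = [torch.tensor([v], dtype=torch.float64, requires_grad=True)
              for v in init_vals]

learn_rates = [1e-5, 1e-10, 1e-3, 1e-3, 1e-3, 1e-3, 1e-3]

# one parameter group per parameter, each with its own learning rate
opt = optim.Adam([{"params": [p], "lr": lr}
                  for p, lr in zip(param_list, learn_rates)])

for i in range(10):
    opt.zero_grad()
    params = torch.cat(param_list)   # reassemble the 7-element vector for func
    loss = my_loss(func(x_index, params), wavenumber_vals)
    loss.backward()
    opt.step()
```

Note that torch.cat produces a fresh (non-leaf) tensor each iteration, but that's fine here because the optimizer updates the leaf tensors in param_list, never the concatenated view. Alternatively, if you want to keep your manual loop, update the single params tensor in place under torch.no_grad() (e.g. params.sub_(learn_rate * params.grad) followed by params.grad = None) so that params stays a leaf tensor across iterations.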