Gradient of loss w.r.t. two groups of parameters?

Hi There,

Please consider the following case:

aa = torch.FloatTensor([10, 10])
aa.requires_grad = True
y = net(x)
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
loss = aa * criterion(y)
......

Suppose we want to minimize the loss by alternately updating aa and the parameters of net(). How can this be implemented with the backward() and step() functions?

Thank you so much!

Something like this:

# alternate which group of parameters receives gradients
aa.requires_grad = bool(epoch % 2)
if aa.requires_grad:
  # freeze the network: run the forward pass without building its graph,
  # so only aa receives a gradient from loss.backward()
  with torch.no_grad():
    y = net(x)
else:
  # normal forward pass: gradients flow to the network parameters
  y = net(x)

You may also want to assign aa to a distinct optimizer or parameter group, since this procedure only makes sense if you want to adjust the training dynamics of aa separately.
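
In case a fuller picture helps, here is a minimal end-to-end sketch of the alternating scheme with two separate optimizers. Everything concrete in it (the Linear net, the MSE criterion with reduction='none', the shapes of x and target, the learning rates) is a placeholder assumption; only the alternating backward()/step() pattern is the point.

import torch
import torch.nn as nn
import torch.optim as optim

# Toy placeholders -- net, x, target, and criterion stand in for whatever
# you actually use; only the alternating-update pattern matters here.
net = nn.Linear(4, 2)
x = torch.randn(8, 4)
target = torch.randn(8, 2)
criterion = nn.MSELoss(reduction='none')   # per-element loss so aa can reweight it

aa = torch.tensor([10.0, 10.0], requires_grad=True)

opt_net = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
opt_aa = optim.SGD([aa], lr=0.01)          # separate optimizer (and hyperparameters) for aa

for epoch in range(10):
    update_aa = bool(epoch % 2)
    aa.requires_grad_(update_aa)

    if update_aa:
        # freeze the network so only aa gets a gradient
        with torch.no_grad():
            y = net(x)
    else:
        y = net(x)

    loss = (aa * criterion(y, target)).mean()

    opt_net.zero_grad()
    opt_aa.zero_grad()
    loss.backward()

    # step only the optimizer for the group being updated this epoch
    (opt_aa if update_aa else opt_net).step()

Keeping two optimizers means the hyperparameters of the two groups (learning rate, momentum, etc.) stay independent, which is usually what you want when aa plays a different role than the network weights.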