Dear all,
I am a rookie with PyTorch.
I am trying to add an extra parameter (essentially a scalar) to a neural network that I previously defined and registered with the optimizer:
optimizer = torch.optim.Adam(model_g.parameters(), lr=learning_rate)
a = torch.tensor([1.0]).type(Tensor)  # Tensor is a dtype alias defined earlier, e.g. torch.FloatTensor
optimizer.add_param_group({'params': a})
Then, inside the training loop, I do:
# Forward pass
g = model_g(input)
# Compute the loss
loss = loss_fn(a, g)
# Finally
optimizer.zero_grad()
loss.backward()
optimizer.step()
'g' changes accordingly during the loop, but 'a' does not (it would if 'a' were a parameter of a neural network). Is there a pretty and clean way to optimize 'a' as well?
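
In case it helps, here is a minimal self-contained version of what I am running. The concrete definitions of Tensor, model_g, loss_fn, learning_rate, and input are placeholders I picked just for this example (my real ones are longer), but the behaviour is the same:

import torch

# Placeholder definitions for this example only:
Tensor = torch.FloatTensor                      # the dtype alias used above
model_g = torch.nn.Linear(4, 1)                 # stand-in for my network
loss_fn = lambda a, g: ((g - a) ** 2).mean()    # stand-in for my loss
learning_rate = 1e-2
input = torch.randn(8, 4)

optimizer = torch.optim.Adam(model_g.parameters(), lr=learning_rate)
a = torch.tensor([1.0]).type(Tensor)
optimizer.add_param_group({'params': a})

for step in range(100):
    g = model_g(input)       # forward pass
    loss = loss_fn(a, g)     # loss depends on both 'a' and 'g'
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(a)  # still tensor([1.]) -- 'a' never moves, unlike the network weights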
Thanks a lot