Adding a scalar parameter

Dear all,

I am a rookie with PyTorch.

I am trying to add a parameter (something like a scalar) to a neural network that I previously defined, and to register it with the optimizer:

optimizer = torch.optim.Adam(model_g.parameters(), lr=learning_rate)

a = torch.tensor([1.0]).type(Tensor)  # Tensor is a tensor type defined elsewhere in my code
optimizer.add_param_group({'params': a})

then I do:

# Forward pass
g = model_g(input)

# Compute the loss
loss = loss_fn(a, g)

# Finally
optimizer.zero_grad()
loss.backward()    
optimizer.step()

ā€œgā€ changes accordingly during the loop but ā€œaā€ does not (it would if ā€œaā€ were a neural network). Is there any pretty and clean way to optimize ā€œaā€ as well.

Thanks a lot

Found the error: the tensor needs requires_grad=True and has to be created directly on the device with device="cuda":

a = torch.tensor([1.0], requires_grad=True, device="cuda")  # leaf tensor with gradients enabled, created directly on the GPU
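
For completeness, here is a minimal self-contained sketch of the corrected setup. The model, loss, and data below are placeholders standing in for the original model_g / loss_fn, not the actual code:

import torch

# Placeholder model, loss, and data (assumptions, not the original definitions)
device = "cuda" if torch.cuda.is_available() else "cpu"
model_g = torch.nn.Linear(4, 1).to(device)
loss_fn = torch.nn.MSELoss()
learning_rate = 1e-3

optimizer = torch.optim.Adam(model_g.parameters(), lr=learning_rate)

# The scalar must be a leaf tensor with requires_grad=True,
# created directly on the target device, before it is registered
a = torch.tensor([1.0], requires_grad=True, device=device)
optimizer.add_param_group({'params': a})

input = torch.randn(8, 4, device=device)

# Forward pass
g = model_g(input)

# Compute the loss; "a" is broadcast to the shape of "g"
loss = loss_fn(a.expand_as(g), g)

optimizer.zero_grad()
loss.backward()
optimizer.step()

print(a.grad)  # populated after backward()
print(a)       # updated by optimizer.step()

A common alternative is to wrap the scalar in torch.nn.Parameter and assign it as an attribute of the module, so it is returned by model_g.parameters() automatically and no extra param group is needed.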
