Hey, I’m initializing a trainable parameter and adding it to the optimizer like so:
import torch
import torch.nn as nn
from torch.optim import Adam

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
lamb = nn.Parameter(torch.tensor(0.0, device=device, dtype=torch.float32))  # requires_grad=True is already the default for nn.Parameter
params = [
    {'params': net.parameters(), 'lr': 1e-3},
    {'params': lamb, 'lr': 1e-3}
]
optimizer = Adam(params)
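Just to rule out a registration problem, I printed the param groups, and lamb does show up:

for i, group in enumerate(optimizer.param_groups):
    print(i, [p.shape for p in group['params']])
# group 1 prints [torch.Size([])] -> the scalar lamb is registered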
I want to use this parameter as a trainable weight in some matrix-vector multiplications, where only lamb should be learned. That’s why I wrap lamb into another tensor:
xi = torch.tensor([[lamb], [-1], [1]], requires_grad=True, device=device, dtype=torch.float32)
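For context, a stripped-down version of my training step looks like this (A is just a placeholder for my real data matrix, and the loss is simplified):

A = torch.randn(5, 3, device=device)  # placeholder data
loss = (A @ xi).pow(2).mean()         # some matrix-vector multiplication involving xi
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(lamb)                           # still 0.0, and lamb.grad is None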
But when I train with this, lamb doesn’t get updated at all (its .grad stays None after backward()), even though my optimizer is fully aware of the parameter. Am I missing something, or is there a better way to do something like this?
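One thing I wondered: should I build xi with a differentiable op like torch.cat instead of torch.tensor, rebuilding it every iteration so it always uses the current value of lamb? A rough sketch of what I mean (minus_one/plus_one are just helper constants I made up):

minus_one = torch.tensor([[-1.0]], device=device)
plus_one = torch.tensor([[1.0]], device=device)
xi = torch.cat([lamb.reshape(1, 1), minus_one, plus_one], dim=0)  # shape (3, 1); built from lamb itself, not a copy of its value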
Thanks in advance!