Update my own parameters using torch.optim.SGD?

I define my own learnable parameters like this:

w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True)

Now, after getting the loss, I would like to optimize w1 with torch.optim.SGD. How can I do this?

You can just pass it in a list to the optimizer:

import torch

# Your parameter
w = torch.ones(1, requires_grad=True)
optimizer = torch.optim.SGD([w], lr=1.)

# Your forward and backward pass (populates w.grad)
(w * torch.ones(1)).backward()

# Update the parameter: w -= lr * w.grad
optimizer.step()
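Applied to the original question, w1 can be passed to the optimizer in the same way. Below is a minimal training-loop sketch; the sizes D_in, H, N, the learning rate, and the dummy data are assumptions for illustration, not from the original post. Note the optimizer.zero_grad() call, which clears stale gradients before each backward pass:

import torch

# Hypothetical sizes and setup, mirroring the question
D_in, H, N = 10, 5, 64
device, dtype = torch.device("cpu"), torch.float32

w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True)
optimizer = torch.optim.SGD([w1], lr=1e-2)

x = torch.randn(N, D_in, device=device, dtype=dtype)  # dummy input
y = torch.randn(N, H, device=device, dtype=dtype)     # dummy target

for step in range(100):
    optimizer.zero_grad()              # clear gradients from the previous step
    y_pred = x.mm(w1)                  # forward pass: (N, D_in) @ (D_in, H)
    loss = (y_pred - y).pow(2).mean()  # mean squared error
    loss.backward()                    # populate w1.grad
    optimizer.step()                   # w1 -= lr * w1.grad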

Thank you for your reply!