The optimizers should still work, since they just use the .grad
attribute of each parameter passed to them:
import torch

x = torch.zeros(1)
optimizer = torch.optim.SGD([x], lr=1.)
# assign the gradient manually instead of calling backward()
x.grad = torch.tensor([10.])
optimizer.step()  # x <- x - lr * grad = 0 - 1 * 10
print(x)
> tensor([-10.])
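The same mechanism applies to module parameters; here is a minimal sketch (the nn.Linear model is just an illustrative placeholder, not part of the original question):

import torch
import torch.nn as nn

model = nn.Linear(2, 1, bias=False)
optimizer = torch.optim.SGD(model.parameters(), lr=1.)
# set the gradients by hand instead of calling backward()
for p in model.parameters():
    p.grad = torch.ones_like(p)
before = model.weight.detach().clone()
optimizer.step()
print(model.weight - before)  # each entry changes by -lr * grad = -1.0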