How to set gradients manually and update weights using that?

1. Are you 100% sure that there is no difference in the parameter after the step? Maybe you are using an adaptive optimizer and the effective learning rate is almost zero (a quick check is sketched below).
2. Are you 100% sure that this optimizer is the one responsible for updating that parameter?
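
To rule out the first point, you can snapshot the parameter before the step and compare it afterwards. This is only a minimal sketch of that check; the Linear layer and the Adam optimizer here are just stand-ins for an adaptive setup, the exact model and optimizer don't matter.

import torch

network = torch.nn.Linear(20, 10)
optimizer = torch.optim.Adam(network.parameters(), lr=1e-3)

loss = network(torch.randn(1, 20)).sum()
loss.backward()

before = network.weight.detach().clone()  # snapshot of the weights before the step
optimizer.step()

# the update did happen; it may just be too small to notice by eye
print((network.weight.detach() - before).abs().max())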

I wrote this code to test it (I am using PyTorch v0.4.0):

import torch

target = torch.randn(1, 10)
input = torch.randn(1, 20)
network = torch.nn.Linear(20, 10)
optimizer = torch.optim.SGD(network.parameters(), lr=0.01)
optimizer.zero_grad()
loss = ((network(input)-target)**2).sum()
loss.backward()
print(network.weight[0, 0])
optimizer.step()
print(network.weight[0, 0])  # works (the value has changed)
network.weight.grad[0, 0] = 100  # manually overwrite a single gradient entry
optimizer.step()
print(network.weight[0, 0])  # works (the value has changed)
network.weight.grad = torch.randn(10, 20)  # replace the whole gradient tensor
optimizer.step()
print(network.weight[0, 0])  # works (the value has changed)
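
If you would rather apply the gradients yourself instead of going through an optimizer, the plain SGD update can also be done by hand. This is just a sketch continuing the snippet above; lr is a hypothetical learning rate, and the update is wrapped in torch.no_grad() so autograd does not record it.

lr = 0.01  # hypothetical learning rate for the manual update
with torch.no_grad():
    for param in network.parameters():
        if param.grad is not None:
            param -= lr * param.grad  # plain SGD: w <- w - lr * grad
print(network.weight[0, 0])  # changed again, this time without optimizer.step()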