Can we set param.grad manually instead of calling backward(), and THEN call backward() to add additional grads on top?

For example:

for param in model.parameters():
    param.grad = torch.full_like(param, 3.14)

### do some additional compute

loss.backward()

optimizer.step()

Will this add the new grads from backward() on top of the manually set ones of 3.14? Or will it override the manually set ones?

Hi Sam!

Yes, backward() will accumulate its gradients on top of the ones you set
manually. Just remember to call optimizer.zero_grad() before setting
param.grad (otherwise you will zero out the values you set), and wrap
the assignment in a torch.no_grad() block:

optimizer.zero_grad()

with torch.no_grad():
    for param in model.parameters():
        param.grad = torch.full_like(param, 3.14)

### do some additional compute, e.g.
loss = my_loss_function(input, target)

loss.backward()

optimizer.step()
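
For completeness, here is a minimal, self-contained sketch of this pattern that you can run to verify the accumulation behavior. The toy nn.Linear model, SGD optimizer, and mse_loss are illustrative assumptions, not from the original question:

import torch
import torch.nn as nn

model = nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

optimizer.zero_grad()
with torch.no_grad():
    for param in model.parameters():
        # seed every grad with a constant tensor of 3.14
        param.grad = torch.full_like(param, 3.14)

input = torch.randn(4, 2)
target = torch.randn(4, 1)
loss = nn.functional.mse_loss(model(input), target)

loss.backward()

# each param.grad is now 3.14 plus the gradient computed by backward()
for param in model.parameters():
    print(param.grad)

optimizer.step()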

Best.

K. Frank
