Yes, but remember to call optimizer.zero_grad() before setting the gradients (otherwise you will zero out the values you just set), and wrap the gradient assignment in a torch.no_grad() block:
optimizer.zero_grad()
with torch.no_grad():
    for param in model.parameters():
        # .grad must be a tensor with the same shape as the parameter
        param.grad = torch.full_like(param, 3.14)

# do some additional compute, e.g.
loss = my_loss_function(input, target)
loss.backward()   # accumulates onto the manually set gradients
optimizer.step()
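
For reference, here is a minimal self-contained sketch of the same pattern, assuming a small nn.Linear model and MSE loss as stand-ins for model and my_loss_function; backward() accumulates onto the gradients you set manually, and step() then applies the combined update:

import torch
import torch.nn as nn

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

inp = torch.randn(8, 4)
target = torch.randn(8, 1)

optimizer.zero_grad()                      # clear any stale gradients first
with torch.no_grad():
    for param in model.parameters():
        # assign a tensor of the same shape as the parameter
        param.grad = torch.full_like(param, 3.14)

loss = nn.functional.mse_loss(model(inp), target)
loss.backward()                            # adds the computed grads to the ones set above
optimizer.step()                           # updates using the accumulated gradients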