PyTorch equivalent of TensorFlow's `apply_gradients`

Suppose I already have gradients from a previous call to `torch.autograd.grad`. Is there a way to use these gradients in a call to `optimizer.step()`, the same way that in TensorFlow I can feed precomputed gradients to `optimizer.apply_gradients(zip(gradients, variables))`? Computing the gradients a second time via `.backward()` makes my program unacceptably slow.
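
The only workaround I've found is to write the precomputed gradients into each parameter's `.grad` field by hand before calling `optimizer.step()`. Here is a minimal sketch of what I mean (the model, data, and names are just placeholders I made up, not my real code):

```python
import torch

# Toy setup -- model, data, and learning rate are placeholders.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 4)
loss = model(x).pow(2).mean()

# Compute gradients once, without calling loss.backward().
params = list(model.parameters())
grads = torch.autograd.grad(loss, params)

# Manually assign the precomputed gradients to each parameter's
# .grad field -- optimizer.step() reads gradients from there.
for p, g in zip(params, grads):
    p.grad = g

optimizer.step()
optimizer.zero_grad()
```

Is this manual assignment the intended equivalent, or does `torch.optim` offer a more direct way to consume externally computed gradients?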