Equivalent of "apply_gradients" and "compute_gradients" from TensorFlow in PyTorch

I’m looking for a way to recreate the functionality of TensorFlow’s “compute_gradients” and “apply_gradients” functions in PyTorch, using autograd.grad() or backward(). Any idea how one could achieve that?
In TensorFlow, we use these two functions like this:

grad1 = tf.gradients(loss1, model.variables) # gradients of loss1 w.r.t. the model variables
grad2 = tf.gradients(loss2, model.variables) # gradients of loss2 w.r.t. the model variables
# modify the grads, e.g. combine the grads from the various losses, then pass the final grads to apply_gradients
combined_grads = modify(grad1, grad2)
optimizer.apply_gradients(zip(combined_grads, model.variables), global_step=step_counter)
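For reference, here is a minimal sketch of what I imagine the PyTorch equivalent would look like: torch.autograd.grad() plays the role of compute_gradients (it returns the gradients without touching .grad), and writing the combined gradients into each parameter's .grad before optimizer.step() plays the role of apply_gradients. The model, losses, and the averaging step are just placeholders for illustration:

```python
import torch

# toy model and optimizer, stand-ins for the real ones
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
params = list(model.parameters())

x = torch.randn(8, 4)
out = model(x)
loss1 = out.pow(2).mean()
loss2 = out.abs().mean()

# "compute_gradients": autograd.grad returns the grads as tensors
# without accumulating into .grad; retain_graph=True because both
# losses share the same forward graph
grad1 = torch.autograd.grad(loss1, params, retain_graph=True)
grad2 = torch.autograd.grad(loss2, params)

# "modify": placeholder combination, here a simple average
combined_grads = [(g1 + g2) / 2 for g1, g2 in zip(grad1, grad2)]

# "apply_gradients": write the combined grads into .grad, then step
optimizer.zero_grad()
for p, g in zip(params, combined_grads):
    p.grad = g
optimizer.step()
```

Is manually assigning to p.grad like this the idiomatic way, or is there a cleaner API for it?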