Expediting loss computation in RL

I have an RL setup with both a value function and a policy function, and each needs to be updated in turn. Is it possible to do both updates in one go, without doing something like the SpinningUp SAC implementation (spinningup/sac.py at master · openai/spinningup · GitHub), where each update is computed with the other network's parameters set to requires_grad=False?
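In case it helps frame the question: when the policy loss only needs the value output as a fixed signal (e.g. a baseline), one common alternative to toggling requires_grad is to detach the value net's output inside the policy loss and run a single backward pass over the summed losses. A minimal sketch of that idea, with hypothetical tiny networks standing in for the actual value and policy functions:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the value and policy networks.
value_net = nn.Linear(4, 1)
policy_net = nn.Linear(4, 2)

value_opt = torch.optim.Adam(value_net.parameters(), lr=1e-3)
policy_opt = torch.optim.Adam(policy_net.parameters(), lr=1e-3)

obs = torch.randn(8, 4)        # fake batch of observations
target = torch.randn(8, 1)     # fake value targets

# Value loss: ordinary regression toward the targets.
value_loss = ((value_net(obs) - target) ** 2).mean()

# Policy loss: uses the value net's output as a baseline, detached so
# that backward() through this term sends no gradient into value_net.
baseline = value_net(obs).detach()
logp = torch.log_softmax(policy_net(obs), dim=-1)[:, 0]
policy_loss = -(logp * baseline.squeeze(-1)).mean()

# Single backward pass over the combined loss, then step both optimizers.
(value_loss + policy_loss).backward()
value_opt.step()
policy_opt.step()
value_opt.zero_grad()
policy_opt.zero_grad()
```

Note this detach trick only applies when no gradient needs to flow through the value net at all in the policy loss. In SAC specifically, the policy loss is Q(s, pi(s)) and gradients must flow through Q to the action while leaving Q's own parameters untouched, which is why SpinningUp toggles requires_grad on the Q parameters instead.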