Freezing grad for some parameters

I have two losses, L1 and L2 = f(L1). L1 should update one group of parameters in my neural network, while L2 should update a different group. When I call backward on both losses, the gradients of one parameter group are accumulated from both losses, so the resulting grads are not correct. Is there a way to achieve this? Thank you.
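For context, here is a minimal sketch of the kind of setup I mean (the model split, the loss forms, and names like `net_a` / `net_b` are just placeholders, not my real code). One workaround I am considering is computing each loss's gradients only with respect to its own parameter group via `torch.autograd.grad`:

```python
import torch
import torch.nn as nn

# Toy split: net_a should only be trained by L1, net_b only by L2 = f(L1).
net_a = nn.Linear(10, 1)   # parameters meant to receive grads from L1 only
net_b = nn.Linear(1, 1)    # parameters of "f", meant to receive grads from L2 only
params_a = list(net_a.parameters())
params_b = list(net_b.parameters())
opt_a = torch.optim.SGD(params_a, lr=0.1)
opt_b = torch.optim.SGD(params_b, lr=0.1)

x, y = torch.randn(4, 10), torch.randn(4, 1)

L1 = (net_a(x) - y).pow(2).mean()          # depends on net_a only
L2 = net_b(L1.view(1, 1)).pow(2).mean()    # L2 = f(L1), f parameterized by net_b

# Compute each loss's gradients only w.r.t. its own parameter group,
# so nothing gets accumulated into the other group.
grads_a = torch.autograd.grad(L1, params_a, retain_graph=True)  # keep graph, L2 is built on it
grads_b = torch.autograd.grad(L2, params_b)

# Assign (not accumulate) the grads, then step each optimizer.
for p, g in zip(params_a, grads_a):
    p.grad = g
for p, g in zip(params_b, grads_b):
    p.grad = g

opt_a.step()
opt_b.step()
```

Alternatively, if f only needs the value of L1 and no gradient should flow from L2 back into the first group, I could build L2 from `L1.detach()` and then two plain `.backward()` calls would each touch only their own parameters. I am not sure which of these is the recommended pattern.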