How to avoid recalculating a function when we need to backpropagate through it twice?

Thank you, but I did not mention any error. My goal is to avoid calculating the function f twice. In your solution, I think it is still calculated twice, right? Unless PyTorch automatically caches the first evaluation of f and somehow reuses that cached graph in the second invocation, including the detached subgraph made up of the ancestors of x?
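
To make the question concrete, here is a minimal sketch of the pattern I am hoping is possible (f and x below are just placeholders standing in for the ones from my original post, and I am assuming the simple case of running backward twice through the same graph rather than a second-order gradient):

```python
import torch

# Placeholder for the expensive function f from my question.
def f(x):
    return (x ** 2).sin().sum()

x = torch.randn(5, requires_grad=True)

y = f(x)  # f is evaluated only once here

# First backward pass: retain_graph=True keeps the graph of f alive
# instead of freeing it, so nothing needs to be recomputed later.
y.backward(retain_graph=True)
first_grad = x.grad.clone()

# Second backward pass reuses the same retained graph of f;
# f itself is never run a second time.
x.grad.zero_()
y.backward()
second_grad = x.grad.clone()
```

Is something along these lines achievable in the setup you suggested, or does detaching x force a second forward pass through f?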