Listing variables in a computational graph

I have two neural networks. In theory, the losses computed at the end for each of them should depend on distinct sets of variables. But for some reason, I get a common error:

RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed).

Is it possible to inspect all the variables in the computational graph used for backpropagation (the backward function)?
And is it possible to see the variable at which the two graphs coincide?
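
You can walk the graph by hand starting from each loss's grad_fn. Below is a minimal sketch, with the caveat that next_functions and the .variable attribute of AccumulateGrad nodes are internal autograd details rather than public API, and loss1/loss2 here are a toy stand-in for your real losses:

```python
import torch

def graph_leaves(grad_fn, seen=None):
    # Recursively walk the autograd graph behind a loss and yield every
    # leaf tensor (AccumulateGrad node) that backward() would reach.
    # next_functions and .variable are internal autograd attributes and
    # may change between PyTorch versions.
    if seen is None:
        seen = set()
    if grad_fn is None or grad_fn in seen:
        return
    seen.add(grad_fn)
    if hasattr(grad_fn, "variable"):  # AccumulateGrad wraps a leaf tensor
        yield grad_fn.variable
    for next_fn, _ in grad_fn.next_functions:
        yield from graph_leaves(next_fn, seen)

# Tiny demo: w_shared sits in both graphs, w1/w2 in only one each.
w_shared = torch.randn(3, requires_grad=True)
w1 = torch.randn(3, requires_grad=True)
w2 = torch.randn(3, requires_grad=True)
h = (w_shared * w_shared).sum()
loss1 = h + w1.sum()
loss2 = h + w2.sum()

shared = set(graph_leaves(loss1.grad_fn)) & set(graph_leaves(loss2.grad_fn))
print(shared)  # contains w_shared: a leaf where the two graphs coincide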

What does your code look like? Do you call backward once on a single combined loss, or separately on each of the two individual losses?
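
For reference, that distinction is exactly what triggers the error: if the two graphs overlap anywhere, the first backward() frees the saved tensors of the shared part, and the second call fails. A minimal sketch of the failure mode and the two usual workarounds:

```python
import torch

w = torch.randn(3, requires_grad=True)
h = (w * w).sum()            # shared intermediate; its backward saves w
loss1, loss2 = 2 * h, 3 * h  # two losses built on the same subgraph

# Without retain_graph=True on the first call, the second backward()
# raises "Trying to backward through the graph a second time".
loss1.backward(retain_graph=True)
loss2.backward()

# Alternatively, backprop once on the combined loss:
# (loss1 + loss2).backward()
```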

Basically, there are two reinforcement-learning policy networks (but that shouldn't matter). The losses are separate (I tried feeding them different rewards, etc.), but I must have made a mistake somewhere, and the two graphs coincide at some point.
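
For what it's worth, a common way two nominally separate RL losses end up coupled is a tensor from one network (e.g. a baseline or value estimate) entering the other loss without being detached. This is a hypothetical sketch of that coupling pattern, not the actual code; policy1, policy2, and logp are made-up names:

```python
import torch

state = torch.randn(4)
policy1 = torch.nn.Linear(4, 1)   # stand-in for the first policy net
policy2 = torch.nn.Linear(4, 1)   # stand-in for the second one
reward = 1.0

logp = policy1(state)             # lives in policy1's graph
baseline = policy2(state)         # lives in policy2's graph

# This couples the graphs: backward on this loss walks into policy2.
coupled_loss = -(reward - baseline) * logp

# Detaching the baseline keeps the two graphs independent:
separate_loss = -(reward - baseline.detach()) * logp
```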