Evaluate expression without adding it to the backward graph

Hi all, I am curious: how can I evaluate an expression without including it in the computational graph?

For example, in this code I need the network to produce Q-values before I feed them back in for training.

However, these evaluations seem to affect the gradients. Right now I work around the problem by detaching Q_targets, but this solution feels messy.
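
Roughly, my current workaround looks like this (a minimal sketch; `q_net`, `next_states`, and `rewards` are placeholder names, not my actual code):

```python
import torch
from torch.autograd import Variable

q_net = torch.nn.Linear(4, 2)              # placeholder for the real Q-network
next_states = Variable(torch.randn(8, 4))  # placeholder batch of states
rewards = Variable(torch.randn(8))

# Evaluate target Q-values, then detach so that the training loss
# computed from them does not backpropagate through this forward pass.
Q_targets = rewards + 0.99 * q_net(next_states).max(1)[0]
Q_targets = Q_targets.detach()
```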

What’s the intended way of controlling the dynamic graph?


The basic idea is that operations on a Variable are equivalent to operations on a Tensor, but they also build the dynamic graph.
If you don’t want the dynamic graph (no gradient will flow back), just work with Tensors directly.
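
For example (a minimal sketch; `x` is just an illustrative Variable):

```python
import torch
from torch.autograd import Variable

x = Variable(torch.randn(3), requires_grad=True)

# x.data is the underlying tensor; operations on tensors build no graph,
# so no gradient will flow back through anything computed here.
y = x.data * 2
z = y + 1  # still a plain tensor, not tracked by autograd
```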

If this operation happens inside an nn.Module that only accepts Variables, you can forward a Variable with the flag volatile=True (which makes this forward pass more efficient, since you won’t backpropagate through it). Then either detach the output Variable to get a new one that you can use independently of what was done before, or take the tensor corresponding to the output of your network (with .data) and use it for other computations.
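
Concretely, something like this (a sketch, with `net` standing in for your module):

```python
import torch
from torch.autograd import Variable

net = torch.nn.Linear(4, 2)  # stands in for your nn.Module

# volatile=True tells autograd to record nothing for this forward pass,
# which also saves memory since no intermediate results are kept around.
inp = Variable(torch.randn(8, 4), volatile=True)
out = net(inp)

# Option 1: re-wrap the data in a fresh, non-volatile Variable that is
# independent of the forward pass above.
out_var = Variable(out.data)

# Option 2: take the raw output tensor and compute with it directly.
out_tensor = out.data
```

Note that volatility propagates through operations, which is why you want to re-wrap or take .data before mixing the output with graph-building code.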

@albanD thank you, this made everything clear!