How to access `graph` in Variable?

When I ran the following code, I got an error message:

import torch
from torch.autograd import Variable

tensor = torch.FloatTensor([[1,2],[3,4]])
variable_false = Variable(tensor) # can't compute gradients
variable_true = Variable(tensor, requires_grad=True)

# tensor operations
t_out = torch.mean(tensor*tensor)
# variable operations
v_out_false = torch.mean(variable_false*variable_false)
v_out_true = torch.mean(variable_true*variable_true)

# backpropagation
v_out_false.backward()

RuntimeError: there are no graph nodes that require computing gradients

I know I should have added requires_grad=True, but my questions are:

  1. The graph is the computational graph used for calculating gradients, right?
  2. How can such a graph and its nodes be accessed?

Thanks!

  1. The mentioned graph is the one that contains all the computations that were used to get to the final Variable. It is the graph used to determine which gradients should be computed with backprop.
  2. You can find an example of how to traverse this graph for visualization purposes here: https://github.com/szagoruyko/functional-zoo/blob/master/visualize.py
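For a quick look at the entry point of that graph, a sketch like the following (assuming a recent build where the attribute is called grad_fn; on older releases it is creator) shows that the result computed from the Variable with requires_grad=True carries a graph node, while the other result does not:

import torch
from torch.autograd import Variable

tensor = torch.FloatTensor([[1, 2], [3, 4]])
variable_false = Variable(tensor)
variable_true = Variable(tensor, requires_grad=True)

v_out_false = torch.mean(variable_false * variable_false)
v_out_true = torch.mean(variable_true * variable_true)

# last node of the backward graph (called .creator on older releases)
print(v_out_true.grad_fn)    # a Mean backward Function object
print(v_out_false.grad_fn)   # None in recent versions: nothing here requires gradients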

Your v_out_false variable isn’t connected to any Variable that requires gradients. If you call v_out_true.backward() instead, it works (see the sketch below).
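Something along these lines (same setup as in the question) should run without the error; the gradient of mean(x*x) with respect to x is x/2:

import torch
from torch.autograd import Variable

variable_true = Variable(torch.FloatTensor([[1, 2], [3, 4]]), requires_grad=True)
v_out_true = torch.mean(variable_true * variable_true)

v_out_true.backward()        # works: the output depends on a Variable that requires grad
print(variable_true.grad)    # d(mean(x*x))/dx = x/2, i.e. [[0.5, 1.0], [1.5, 2.0]]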

  1. Yes
  2. The graph is constructed implicitly through Variable.grad_fn and Function.next_functions. In the current release I believe grad_fn is still called creator, but it has been changed in master; check out this pull request. Unfortunately, I don’t believe it’s possible to reconstruct the forward graph without going through some hoops, but as @albanD mentioned above you can reconstruct the backward graph (see the sketch below).
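A minimal sketch of such a backward-graph traversal, assuming the grad_fn / next_functions naming from master (substitute creator / previous_functions on releases that still use the old names):

import torch
from torch.autograd import Variable

x = Variable(torch.FloatTensor([[1, 2], [3, 4]]), requires_grad=True)
out = torch.mean(x * x)

def walk(fn, depth=0):
    # Recursively print each Function node in the backward graph.
    if fn is None:
        return
    print('  ' * depth + type(fn).__name__)
    # next_functions holds (Function, input_index) pairs for the nodes
    # that produced this node's inputs
    for next_fn, _ in getattr(fn, 'next_functions', ()):
        walk(next_fn, depth + 1)

walk(out.grad_fn)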