Does Variable assignment preserve the graph?

I have code that looks like the following:

import torch
import torch.autograd as grad

# A is a Variable that has undergone previous operations and has an existing graph

dummy = grad.Variable(torch.zeros(6, 5, 4, 3))
for i in range(6):
    # Do something here...
    indices = ...  # determine indices somehow (placeholder)

    dummy[i] = torch.index_select(A[i], 1, indices.view(-1)).view(5, -1, 3)

Two questions:

  1. From this answer, it seems that index_select preserves the graph as long as you are indexing from a Variable that already has one. So am I right to presume that dummy will end up with a graph composed of the graphs of the selected slices?
  2. Does dummy need requires_grad=True, or will assigning a Variable with requires_grad=True into it automatically cause that flag to “flip”?
  1. Yes
  2. The indexed assignment will cause it to have requires_grad=True, because the result of index_select requires grad, and copying that result into dummy records the operation in dummy’s graph
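
For example, here is a minimal sketch of that behavior; the shape of A and the indices are made up, with requires_grad=True standing in for “has an existing graph”:

import torch
import torch.autograd as grad

# Hypothetical Variable with history
A = grad.Variable(torch.randn(6, 5, 12, 3), requires_grad=True)
dummy = grad.Variable(torch.zeros(6, 5, 4, 3))
print(dummy.requires_grad)   # False: no graph yet

indices = torch.LongTensor([0, 1, 2, 3])   # made-up indices
dummy[0] = torch.index_select(A[0], 1, indices.view(-1)).view(5, -1, 3)

print(dummy.requires_grad)   # True: the flag flipped via the assignment
print(dummy.grad_fn)         # a CopySlices node recording the in-place copy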

Final question (referencing my other post): how can I print the graph to verify this? In other words, while debugging, how can I check that I’m not losing track of any operations along the way?

There are a bunch of scripts to visualize the graph floating around the internet, but I don’t think they’re part of core PyTorch.

You can also just inspect the grad_fn attribute and grad_fn.next_functions to manually verify.
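
For example, a minimal sketch that walks the graph by hand; the print_graph helper and the tensors here are hypothetical, and the exact Backward class names vary across PyTorch versions:

import torch
import torch.autograd as grad

def print_graph(fn, depth=0):
    # Recursively print the backward graph rooted at a grad_fn node.
    if fn is None:
        return
    print('  ' * depth + type(fn).__name__)
    for next_fn, _ in fn.next_functions:
        print_graph(next_fn, depth + 1)

A = grad.Variable(torch.randn(6, 5, 12, 3), requires_grad=True)
indices = torch.LongTensor([0, 1, 2, 3])
out = torch.index_select(A[0], 1, indices).view(5, -1, 3)

print_graph(out.grad_fn)
# Prints something like:
# ViewBackward
#   IndexSelectBackward
#     SelectBackward
#       AccumulateGrad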