Traversing the computation graph and seeing saved tensors PROBLEM [Edited: hopefully clearer]

Edit: I tried to make the question clearer.

I need to traverse a computation graph in order to plot a diagram of it.

What I currently do is:
I start from my dummy loss scalar and recursively follow .next_functions.

This lets me “visit” operations and also parameters.
HOWEVER, I don’t manage to visit saved tensors (for example, activations that are saved for the backward pass).
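To make this concrete, here is a rough sketch of the kind of traversal I mean (the toy model and the walk helper are just illustrative, not my actual code):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)
loss = model(torch.randn(1, 4)).sum()   # dummy scalar loss

def walk(fn, depth=0, seen=None):
    # recursively follow .next_functions starting from loss.grad_fn
    seen = set() if seen is None else seen
    if fn is None or fn in seen:
        return
    seen.add(fn)
    label = type(fn).__name__
    if hasattr(fn, 'variable'):          # AccumulateGrad leaf -> a parameter
        label += ' (param %s)' % str(tuple(fn.variable.size()))
    print('  ' * depth + label)
    for next_fn, _ in getattr(fn, 'next_functions', ()):
        walk(next_fn, depth + 1, seen)

walk(loss.grad_fn)
# Operations and parameters show up, but saved activations never do, because the
# backward nodes I reach don't expose a .saved_tensors attribute.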

I can see “old” PyTorch code that assumes the presence of .saved_tensors, but I don’t encounter this attribute when traversing the graph.

Is .saved_tensors still accessible? If not, any suggestions on how to do this?

I’m looking at the following reference:

Snippet below - the full link is at the bottom of the post.

# (snippet from inside the surrounding plotting function: seen, dot, params,
#  param_map and size_to_str are defined there)
def add_nodes(var):
    if var not in seen:
        if torch.is_tensor(var):
            # plain tensors (this is where saved tensors were meant to show up) -> orange
            dot.node(str(id(var)), size_to_str(var.size()), fillcolor='orange')
        elif hasattr(var, 'variable'):
            # AccumulateGrad leaves wrapping parameters -> light blue
            u = var.variable
            name = param_map[id(u)] if params is not None else ''
            node_name = '%s\n %s' % (name, size_to_str(u.size()))
            dot.node(str(id(var)), node_name, fillcolor='lightblue')
        else:
            # intermediate backward nodes (operations)
            dot.node(str(id(var)), str(type(var).__name__))
        seen.add(var)
        if hasattr(var, 'next_functions'):
            for u in var.next_functions:
                if u[0] is not None:
                    dot.edge(str(id(u[0])), str(id(var)))
                    add_nodes(u[0])
        if hasattr(var, 'saved_tensors'):
            # this branch would add the orange nodes, but the attribute is never found
            for t in var.saved_tensors:
                dot.edge(str(id(t)), str(id(var)))
                add_nodes(t)

add_nodes(var.grad_fn)   # var is the output variable passed into the surrounding function

Note: the “orange” nodes seem designed with the intention of displaying exactly what I want, but they never appear, since .saved_tensors isn’t found.


bump!

no one?

Maybe the bleeding-edge version is different in this regard? I’m using version 0.4 -
is there any chance that .saved_tensors is available there when traversing the computation graph?

I guess the bad news is that it isn’t.
It’s still there for Python functions (and those were much more common in the old days), but saved tensors aren’t exposed for the ATen-specified functions.
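For the Python-function case, a quick sketch (MySquare is just a made-up example) shows that the backward node of a torch.autograd.Function still carries the attribute:

import torch

class MySquare(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)          # tensor saved for the backward pass
        return x * x

    @staticmethod
    def backward(ctx, grad_out):
        x, = ctx.saved_tensors
        return 2 * x * grad_out

x = torch.randn(3, requires_grad=True)
y = MySquare.apply(x)
print(type(y.grad_fn).__name__)              # MySquareBackward
print(hasattr(y.grad_fn, 'saved_tensors'))   # True for Python functions
print(y.grad_fn.saved_tensors)               # the saved input tensor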
The longer story is this (take a compiled torch master tree to follow along):

  • Let’s take torch.mm as an example.
  • torch/csrc/autograd/generated/ has the autogenerated Autograd functions.
  • In python_functions.cpp, you can see the *Backward Python classes being created without much functionality (addClass is defined at the top and then used below).
  • The wrapped C++ class is MmBackward from Functions.h. There you can see that self_ and mat2_ are the members you’d be interested in.
  • But back in python_functions.cpp, addClass doesn’t expose them (and it would need a list of saved variables to expose, which could be tricky) - that would be easier if Functions.cpp had such a list or something similar.
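You can see the effect from Python (a quick check; the backward class name and what it exposes may differ between versions, on the 0.4-era builds discussed here nothing saved-tensor-related shows up):

import torch

a = torch.randn(2, 3, requires_grad=True)
b = torch.randn(3, 4)
out = torch.mm(a, b)
print(type(out.grad_fn).__name__)                      # e.g. MmBackward
print(hasattr(out.grad_fn, 'saved_tensors'))           # False
print([n for n in dir(out.grad_fn) if 'saved' in n])   # empty here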

Best regards

Thomas


Thanks @tom,
I really appreciate the detailed answer!

I believe this is important to support, to allow plots that help build a full mental image of what’s going on in the network you are training (including parameters, buffers and activations).

Depending on ONNX or the JIT is problematic because I don’t think they can support all of the exciting custom stuff that is so easy to do in PyTorch (correct me if I’m wrong).

I would love to add the implementation, but it sounds like a fairly delicate topic to jump into without any experience in the autograd C++ code.
However, with some guidance I can give it a try :slight_smile:

Any suggestions for other approaches to plot a graph that contains operators, parameters, buffers and activations?
Maybe I should give JIT tracing more of a chance first?

Actually, another question, @tom:

Was it supported in the past, and was the support removed during recent changes?

I’m asking because you can see here (in cell 5) that accessing .saved_tensors from Python while traversing used to work. The “orange blocks” were created that way.

I’d probably try the JIT (but I don’t have a whole lot of experience with it). As far as I understand, the goal is to support most reasonable things.
These might have been supported previously; originally, a lot more autograd stuff was done in Python (and the genuine Python autograd.Function derivatives still have it).
If the JIT doesn’t cut the mustard for your use case, you could look into whether support is desirable. The Contributing document rightly emphasizes the importance of discussing the feature with the core devs first and only then starting the implementation.
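If you want to try the tracing route, the rough shape is something like this (a sketch with a toy model; the torch.jit API has moved around between versions, so treat it as an outline rather than a recipe):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
example = torch.randn(1, 4)

traced = torch.jit.trace(model, example)   # record the ops on an example input
print(traced.graph)                        # lists the ops and their intermediate values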

Best regards

Thomas


@yoelshoshan, did you figure out how to access/find saved_tensors using JIT tracing?

I would also find it super helpful to have access to them.