How to print the computational graph of a Variable?

Neural Networks — PyTorch Tutorials 2.1.1+cu121 documentation says:

Now, if you follow loss in the backward direction, using its .creator attribute, you will see a graph of computations that looks like this:

input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
      -> view -> linear -> relu -> linear -> relu -> linear
      -> MSELoss
      -> loss

But its creator attribute only prints <torch.nn._functions.thnn.auto.MSELoss object at 0x7f784059d5c0>. Is there a convenient way to print the whole computation graph, similar to print(net)?
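For reference, in recent PyTorch versions the .creator attribute was renamed to .grad_fn, and the backward graph can be walked manually via each node's next_functions. A minimal sketch (assuming a modern PyTorch; node class names vary by version):

```python
import torch

x = torch.randn(1, requires_grad=True)
loss = (x * 2).sin().sum()

def print_graph(fn, depth=0):
    """Recursively print the backward graph rooted at a grad_fn node."""
    if fn is None:
        return
    print("  " * depth + type(fn).__name__)
    for next_fn, _ in fn.next_functions:
        print_graph(next_fn, depth + 1)

# Walk from the loss backwards; leaves show up as AccumulateGrad nodes.
print_graph(loss.grad_fn)
```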


Hi,

You can use this script to create a graph https://github.com/szagoruyko/functional-zoo/blob/master/visualize.py

To open it with a regular pdf viewer you can do make_dot(your_var).view().
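As a rough usage sketch (assuming the third-party torchviz package, pip install torchviz, which now hosts make_dot): the optional params argument maps names to leaf tensors so they appear labeled in the rendered graph.

```python
import torch

# torchviz is a third-party package (pip install torchviz); guard the import
# so the sketch degrades gracefully when it is not installed.
try:
    from torchviz import make_dot
    HAVE_TORCHVIZ = True
except ImportError:
    HAVE_TORCHVIZ = False

x = torch.randn(3, requires_grad=True)
y = (x * x).sum()

if HAVE_TORCHVIZ:
    # make_dot returns a graphviz.Digraph; params labels leaf tensors.
    dot = make_dot(y, params={"x": x})
    print(dot.source)  # the DOT description; dot.view() opens a PDF viewer
```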


I want to generate the following computation graph, mentioned in How Computational Graphs are Constructed in PyTorch | PyTorch.

I tried make_dot with the following code:

from graphviz import Digraph
import torch
from torch.autograd import Variable

# make_dot was moved to https://github.com/szagoruyko/pytorchviz
from torchviz import make_dot

x1 = Variable(torch.Tensor([3]), requires_grad=True)
x2 = Variable(torch.Tensor([5]), requires_grad=True)

a = torch.mul(x1, x2)
y1 = torch.log(a)
y2 = torch.sin(x2)
w = torch.mul(y1, y2)

make_dot(w)

but it doesn’t work:

Traceback (most recent call last):
  File "graph.py", line 21, in <module>
    make_dot(w)
  File "~/.pyenv/versions/pymarl/lib/python3.6/site-packages/torchviz/dot.py", line 163, in make_dot
    add_base_tensor(var)
  File "~/.pyenv/versions/pymarl/lib/python3.6/site-packages/torchviz/dot.py", line 153, in add_base_tensor
    if var._is_view():
AttributeError: 'Tensor' object has no attribute '_is_view'

Which PyTorch version are you using? The _is_view method is quite old, if I’m not mistaken.

Thank you. The original PyTorch version was 0.4.1; after upgrading it to 1.9.0a, it outputs the following picture:

(image: the rendered graph, showing only square grad_fn nodes)

I notice that it only plots the grad_fn of each tensor in a square node, but no variable names in circle nodes. I tried to use named tensors like

x1 = Variable(torch.Tensor([3]), requires_grad=True)
x2 = Variable(torch.Tensor([5]), requires_grad=True)
x1.names=('x1',)
x2.names=('x2',)

a = torch.mul(x1, x2)
a.names=('a',)

y1 = torch.log(a)
y1.names=('y1',)

y2 = torch.sin(x2)
y2.names=('y2',)

w = torch.mul(y1, y2)
w.names=('w',)

This time Python outputs

UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /pytorch/c10/core/TensorImpl.h:1156.)
  x1.names=('x1',)
Traceback (most recent call last):
  File "graph.py", line 13, in <module>
    a = torch.mul(x1, x2)
RuntimeError: Error when attempting to broadcast dims ['x1'] and dims ['x2']: dim 'x1' and dim 'x2' are at the same position from the right but do not match.

Is it also possible to plot the variable name in circle node?


Variables are deprecated since PyTorch 0.4, too, so you should remove their usage.
The mul operation seems to fail after adding the names. I’m not familiar with the current support of Named tensors, but unless this usage is wrong, it looks like a valid bug. Would you mind creating an issue on GitHub for it, please?
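Concretely, the snippet above can be rewritten without the Variable wrapper (a sketch assuming PyTorch >= 0.4, where requires_grad is passed to the factory function directly):

```python
import torch

# Same graph as in the original post, without deprecated Variable.
x1 = torch.tensor([3.0], requires_grad=True)
x2 = torch.tensor([5.0], requires_grad=True)

a = torch.mul(x1, x2)   # a = x1 * x2
y1 = torch.log(a)       # y1 = log(a)
y2 = torch.sin(x2)      # y2 = sin(x2)
w = torch.mul(y1, y2)   # w = y1 * y2

# Gradients flow back to the leaves exactly as with Variable.
w.backward()
print(x1.grad, x2.grad)
```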


Doesn’t it fail because you’re trying to mul a tensor of size 3 with another of size 5? They need to be the same shape for mul to work.

The torch.Tensor usage is deprecated, so I’ve used this as a check (which creates tensors with a single value):

x1 = torch.tensor([3.], requires_grad=True)
x2 = torch.tensor([5.], requires_grad=True)
a = torch.mul(x1, x2)
print(a) # works

x1 = torch.tensor([3.], requires_grad=True)
x2 = torch.tensor([5.], requires_grad=True)
x1.names=('x1',)
x2.names=('x2',)
a = torch.mul(x1, x2)
> RuntimeError: Error when attempting to broadcast dims ['x1'] and dims ['x2']: dim 'x1' and dim 'x2' are at the same position from the right but do not match.

Issue is created at Multiply two named tensor causes RuntimeError · Issue #67168 · pytorch/pytorch · GitHub.

No, their sizes are both torch.Size([1]).


What should we replace torch.Tensor with, if it’s deprecated?

And @Ynjxsjmh, you’re correct; ignore my mistake!

You should either replace it with torch.tensor (lowercase t) if you are passing values to it directly, or use factory methods such as torch.randn, torch.zeros, or torch.empty.
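A few hedged examples of those replacements (assuming a PyTorch >= 0.4 install); note the key pitfall that torch.Tensor(n) allocates an uninitialized n-element tensor, while torch.tensor(n) stores the value n:

```python
import torch

a = torch.tensor([3.0, 5.0])  # from explicit values (note lowercase t)
b = torch.zeros(2, 3)         # factory method: all zeros
c = torch.empty(4)            # uninitialized; contents are arbitrary
d = torch.randn(2, 2)         # standard-normal random values

# torch.Tensor(5) would create an *uninitialized* 5-element float tensor,
# whereas torch.tensor(5) creates a 0-dim tensor holding the value 5.
print(torch.tensor(5).item())
```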
