Now, if you follow loss in the backward direction, using its .creator attribute, you will see a graph of computations that looks like this:
input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
-> view -> linear -> relu -> linear -> relu -> linear
-> MSELoss
-> loss
But its creator attribute only prints <torch.nn._functions.thnn.auto.MSELoss object at 0x7f784059d5c0>. Is there a convenient way to print the whole computation graph, similar to print(net)?
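In recent PyTorch versions, .creator has been renamed to .grad_fn, and every autograd node links back to its inputs via .next_functions, so you can walk the graph manually. A minimal sketch (print_graph is a hypothetical helper, not part of torch):

import torch

x = torch.randn(1, requires_grad=True)
loss = (x * 2).relu().sum()

def print_graph(fn, indent=0):
    # Recursively print each backward node; leaf tensors show up as AccumulateGrad.
    if fn is None:
        return
    print(" " * indent + type(fn).__name__)
    for next_fn, _ in fn.next_functions:
        print_graph(next_fn, indent + 2)

print_graph(loss.grad_fn)
# prints something like:
# SumBackward0
#   ReluBackward0
#     MulBackward0
#       AccumulateGrad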
from graphviz import Digraph
import torch
from torch.autograd import Variable
# make_dot was moved to https://github.com/szagoruyko/pytorchviz
from torchviz import make_dot
x1 = Variable(torch.Tensor([3]), requires_grad=True)
x2 = Variable(torch.Tensor([5]), requires_grad=True)
a = torch.mul(x1, x2)
y1 = torch.log(a)
y2 = torch.sin(x2)
w = torch.mul(y1, y2)
make_dot(w)
but it doesn't work:
Traceback (most recent call last):
File "graph.py", line 21, in <module>
make_dot(w)
File "~/.pyenv/versions/pymarl/lib/python3.6/site-packages/torchviz/dot.py", line 163, in make_dot
add_base_tensor(var)
File "~/.pyenv/versions/pymarl/lib/python3.6/site-packages/torchviz/dot.py", line 153, in add_base_tensor
if var._is_view():
AttributeError: 'Tensor' object has no attribute '_is_view'
After adding names to the tensors:

x1.names = ('x1',)
x2.names = ('x2',)

I get a warning and then an error:

UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /pytorch/c10/core/TensorImpl.h:1156.)
Traceback (most recent call last):
File "graph.py", line 13, in <module>
a = torch.mul(x1, x2)
RuntimeError: Error when attempting to broadcast dims ['x1'] and dims ['x2']: dim 'x1' and dim 'x2' are at the same position from the right but do not match.
Is it also possible to plot the variable names in the circle nodes?
Variables are deprecated since PyTorch 0.4, too, so you should remove their usage.
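As a minimal sketch, the same example without Variable would look like this (assuming a torchviz build that matches the installed PyTorch; the _is_view error above suggests a version mismatch between the two):

import torch
from torchviz import make_dot

x1 = torch.tensor([3.], requires_grad=True)
x2 = torch.tensor([5.], requires_grad=True)
a = torch.mul(x1, x2)
y1 = torch.log(a)
y2 = torch.sin(x2)
w = torch.mul(y1, y2)
make_dot(w).render('graph', format='png')  # writes graph.png via graphviz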
The mul operation seems to fail after adding the names. I'm not familiar with the current state of named tensor support, but unless this usage is wrong, it looks like a valid bug. Would you mind creating an issue on GitHub for it, please?
The torch.Tensor usage is deprecated as well, so I've used the following as a check (it creates tensors holding a single value):
x1 = torch.tensor([3.], requires_grad=True)
x2 = torch.tensor([5.], requires_grad=True)
a = torch.mul(x1, x2)
print(a) # works
x1 = torch.tensor([3.], requires_grad=True)
x2 = torch.tensor([5.], requires_grad=True)
x1.names=('x1',)
x2.names=('x2',)
a = torch.mul(x1, x2)
> RuntimeError: Error when attempting to broadcast dims ['x1'] and dims ['x2']: dim 'x1' and dim 'x2' are at the same position from the right but do not match.
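For what it's worth, if I read the named tensor docs correctly, binary ops require the names of corresponding dimensions to match (or be None), so giving each operand a different name is expected to fail. With matching names the multiplication goes through:

x1 = torch.tensor([3.], requires_grad=True)
x2 = torch.tensor([5.], requires_grad=True)
x1.names = ('d',)
x2.names = ('d',)
a = torch.mul(x1, x2)  # works; a.names == ('d',)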
You should either replace torch.Tensor with torch.tensor (lowercase t) if you are passing values directly, or use factory methods such as torch.randn, torch.zeros, or torch.empty.
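To make the difference concrete, a quick sketch of the constructors (behavior as of recent releases):

import torch

t1 = torch.Tensor(3)     # legacy constructor: uninitialized float tensor with 3 elements
t2 = torch.tensor(3)     # 0-dim int64 tensor holding the value 3
t3 = torch.tensor([3.])  # 1-element float tensor, as used in the snippets above
t4 = torch.zeros(3)      # factory-method alternative

Regarding plotting the variable names in the nodes: torchviz's make_dot accepts a params dict mapping names to tensors, and those names are then used to label the corresponding leaf nodes, assuming a reasonably recent torchviz:

make_dot(w, params={'x1': x1, 'x2': x2})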