How to visualize a model in PyTorch

Just like we have plot_model in Keras, is there a way in PyTorch to visualize a model?
I tried make_dot using:

from torchviz import make_dot

batch = next(iter(dataloader_train))
yhat = model(batch.text)  # Give a dummy batch to forward().

make_dot(yhat, params=dict(list(model.named_parameters()))).render("rnn_torchviz", format="png")

but this gives the error that batch has no attribute 'text'.

Have you tried replacing yhat with model(batch.text) directly in the make_dot call?

From what I remember, make_dot takes the output of a model's forward pass (along with its parameters) and returns an object which you can render into a .pdf.

The error you get says that the variable batch has no attribute text. Try printing batch and see what it contains?

@AlphaBetaGamma96 Thanks!
But this is what I get after replacing it: (the error after replacing)

So, the error is due to the fact that the variable batch, which is a list, has no attribute text. Shouldn't batch be of type Tensor (if it's being passed through a model)? Do the elements within the list have .text as an attribute?

@AlphaBetaGamma96 This is what batch is:
batch = next(iter(train_loader_1))

I am new to PyTorch, so I am unable to fully understand! Any help will be appreciated!

Yes, but it’s returning a variable of type list. Does your model take a list as input or a Tensor? And regardless of what type it should be, batch doesn’t have text as an attribute. (That’s what’s causing the error “list object has no attribute text”). Do the elements within batch have the attribute .text?

The input is a tensor.
Also, batch doesn't have a text attribute.

Still, what else can I do, or what can I replace this code with, to plot my model just as we do in Keras (plot_model)? Is there some easy way? :slight_smile:

Can you print what is returned by print(type(batch))? I'm pretty sure batch is a list, because your error states that a list object doesn't have the attribute text, and calling batch.text looks up the text attribute of batch, which doesn't exist.

This is the source of your error. Do the elements within the list have text as an attribute?

print(type(batch)) gives the output:

<class 'list'>

No, the elements within the list do not have text as an attribute.

So, if you were trying to do inference with your model, i.e. output = m1(input), what exactly would your input variable be? If it's just batch, try removing the .text from batch.text and see if that fixes your issue.

When I print batch , I get:

[tensor([[[-0.0579,  0.0439,  0.0658,  ..., -0.0565,  0.0413, -0.0023]],

        [[ 0.9421,  0.8119,  0.6808,  ...,  0.0039,  0.0252,  0.0431]],

        [[ 0.7007,  0.5435,  0.2290,  ..., -0.1226,  0.0346, -0.0850]],

        ...,

        [[ 0.9834,  0.4084,  0.5638,  ...,  0.0261, -0.0626,  0.0173]],

        [[ 0.8450,  1.0068,  0.9290,  ..., -0.0277,  0.0607,  0.0565]],

        [[ 1.0653,  0.6558,  0.4438,  ...,  0.0670,  0.1114, -0.0131]]]),
 tensor([2, 3, 1, 1, 2, 3, 2, 4, 3, 1, 0, 2, 3, 0, 1, 2, 4, 2, 1, 2, 4, 1, 2, 0, 3, 1, 4, 2, 1, 4, 2, 3])]

Also, on removing the .text part:

from torchviz import make_dot

make_dot(m1(batch), params=dict(list(m1.named_parameters()))).render("cnn_torchviz", format="png")


TypeError                                 Traceback (most recent call last)
      1 from torchviz import make_dot
----> 3 make_dot(m1(batch), params=dict(list(m1.named_parameters()))).render("cnn_torchviz", format="png")

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

in forward(self, input)
     36         #input = input.unsqueeze(1)
     37         #input = input.unsqueeze(0)
---> 38         x = self.conv1(input)
     39         x = self.conv2(x)
     40         x = self.conv3(x)

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

in forward(self, input)
     52         #print("INPUT SHAPE SKIP")
     53         #print(input.shape)
---> 54         conv1 = self.conv_1(input)
     55         x = self.normalization_1(conv1)
     56         x = self.swish_1(x)

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
    257                             _single(0), self.dilation, self.groups)
    258         return F.conv1d(input, self.weight, self.bias, self.stride,
--> 259                         self.padding, self.dilation, self.groups)

TypeError: conv1d(): argument 'input' (position 1) must be Tensor, not list

Ok, it seems that batch is a list of two tensors. I assume the first tensor is your input and the second tensor is your target/label?

Try passing batch[0] as your input! That might work! :slight_smile:

Like this

make_dot(m1(batch[0]), params=dict(list(m1.named_parameters()))).render("cnn_torchviz", format="png")
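To make the suggestion concrete, here is a minimal, self-contained sketch (the toy model, shapes, and names below are assumptions for illustration, not the poster's actual m1): a DataLoader over a TensorDataset yields a list [inputs, targets], so the model must be fed batch[0], not the whole list.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Toy data: 8 samples, 1 channel, length 16 (shapes assumed for illustration)
inputs = torch.randn(8, 1, 16)
targets = torch.randint(0, 5, (8,))          # integer class labels
loader = DataLoader(TensorDataset(inputs, targets), batch_size=4)

model = nn.Conv1d(1, 3, kernel_size=3)       # stand-in for m1

batch = next(iter(loader))
print(type(batch))                           # <class 'list'> - [inputs, targets]
yhat = model(batch[0])                       # feed only the input tensor
print(yhat.shape)                            # torch.Size([4, 3, 14])
```

This yhat (or m1(batch[0]) directly) is then what you would hand to make_dot as its first argument.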

I get this error:

However, when I remove the render portion, it works fine!! Could you please help with how to do the render operation to save this large image? :sweat_smile:

Perhaps just try .render("cnn_torchviz.png")? format might be an argument from an older version of make_dot?

Thank you so much @AlphaBetaGamma96 for the amazing guidance and support throughout!!
I have another query regarding model saving and loading in PyTorch, and I hope you can help me with that too. Shall I ask here itself or raise a new topic?

Not a problem @hs99! I'd suggest reading the tutorial first (Saving and Loading Models — PyTorch Tutorials 1.8.1+cu102 documentation), and if there are still problems, raise a new topic!
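For reference, the core pattern from that tutorial is saving and reloading the model's state_dict. A minimal sketch, assuming a toy nn.Linear stand-in rather than your actual model:

```python
import os
import tempfile

import torch
from torch import nn

model = nn.Linear(4, 2)                      # stand-in for your trained model

path = os.path.join(tempfile.mkdtemp(), "model.pt")
torch.save(model.state_dict(), path)         # save only the learned parameters

model2 = nn.Linear(4, 2)                     # rebuild the same architecture first
model2.load_state_dict(torch.load(path))
model2.eval()                                # set eval mode before inference
```

The key point the tutorial makes is that load_state_dict requires you to construct the model class yourself first; the file holds only the parameter tensors, not the architecture.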