# Understanding the nn Module

Apologies for the naive question, but I've been struggling for days to understand the PyTorch tutorial examples.

Specifically, the following code is from this link: http://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py

My questions are:

• What does `def forward` do? When and how is the function called?
• In the `forward` function, what does `x = x.view(-1, self.num_flat_features(x))` do?

Thanks! I may also be missing some programming background that makes the code hard for me to follow, so it would be helpful if you could kindly point that out!

[code starts here]

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 square convolution
        # kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # If the size is a square you can only specify a single number
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features


net = Net()
print(net)
```

The `forward` function defines how to get the output of the neural net. In particular, it is called when you apply the neural net to an input `Variable`:

```python
net = Net()
net(input)  # calls net.forward(input)
```
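The mechanism behind this is Python's `__call__` protocol: calling the instance delegates to `forward`. Here is a minimal plain-Python sketch of the idea (PyTorch's real `nn.Module.__call__` also runs registered hooks; the `Double` class below is a made-up example, not part of PyTorch):

```python
class Module:
    """Toy stand-in for nn.Module, showing only the dispatch idea."""

    def __call__(self, *args, **kwargs):
        # Calling the instance delegates to its forward method.
        return self.forward(*args, **kwargs)

    def forward(self, *args, **kwargs):
        raise NotImplementedError("subclasses must define forward")


class Double(Module):
    def forward(self, x):
        return 2 * x


m = Double()
print(m(21))  # calling m(...) invokes m.forward(...) -> 42
```

This is why you write `net(input)` rather than `net.forward(input)`: going through `__call__` lets the base class do its bookkeeping before and after your `forward` runs.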

The `view` function takes a tensor and reshapes it.
In particular, here `x` is being reshaped to a matrix that is `-1` by `self.num_flat_features(x)`. The `-1` isn't literally
`-1`: it tells PyTorch to infer that dimension's size from the total number of elements (see the docs).
`x = x.view(-1, self.num_flat_features(x))`

Concretely, this flattens each sample in the batch into a row of `self.num_flat_features(x)` elements, so the inferred first dimension is the batch size.
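You can check the inference by hand: the inferred dimension is the total element count divided by the product of the known dimensions. A small stdlib-only sketch (the shapes are hypothetical, chosen to match the tutorial's 16x5x5 feature maps; no PyTorch required):

```python
from math import prod


def infer_dim(total_elements, known_dims):
    """Mimic how view(-1, d1, d2, ...) infers the -1 dimension."""
    known = prod(known_dims)
    # view raises an error if the shapes are incompatible
    assert total_elements % known == 0, "shape is incompatible"
    return total_elements // known


# A batch of 4 feature maps of shape 16x5x5, viewed as (-1, 400):
total = 4 * 16 * 5 * 5
print(infer_dim(total, (400,)))  # -> 4, the batch size
```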


Hi Richard,

Apologies for the late reply. With more experience I now understand your explanations better. Thanks!

Yuqiong

What exactly are the flat features being defined in num_flat_features?

So the code goes like:

```python
def num_flat_features(self, x):
    size = x.size()[1:]  # all dimensions except the batch dimension
    num_features = 1
    for s in size:
        num_features *= s
    return num_features
```

`x.size()[1:]` returns all dimensions except the batch dimension. E.g. if `x` is a 25x3x32x32 tensor (a batch of 25 images), then `size` would be `torch.Size([3, 32, 32])` and thus `num_features` would be 3 × 32 × 32 = 3072. So it's the total number of values in one image. In other words, `flat_features` are "flattened features".
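The loop in `num_flat_features` is just a running product over the non-batch dimensions. A plain-Python version using the 25x3x32x32 shape from the example above (no PyTorch needed, since tensor sizes behave like tuples here):

```python
size = (25, 3, 32, 32)  # batch of 25 images, each 3x32x32
non_batch = size[1:]    # (3, 32, 32), like x.size()[1:]

# Same loop as num_flat_features: multiply the remaining dims together.
num_features = 1
for s in non_batch:
    num_features *= s

print(num_features)  # 3 * 32 * 32 = 3072
```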


Looks like it flattens all dimensions except the first one. It is similar to `torch.flatten(x, 1)`.