Basic: why is forward the default method of nn.Linear?

Pretty new to Python and PyTorch. Super basic question…

Per the code below:

import torch
import torch.nn as nn

class Perceptron(nn.Module):
    def __init__(self, input_dim):
        super(Perceptron, self).__init__()
        self.fc1 = nn.Linear(input_dim, 1)

    def forward(self, x_in):
        return torch.sigmoid(self.fc1(x_in)).squeeze()

p1 = Perceptron(3)
x = torch.randn((5, 3))
torch.equal(p1.fc1.forward(x), p1.fc1(x))

The result is True.

I looked at the source code and couldn't figure out:

  1. why I can run p1.fc1(x) directly
  2. why calling it defaults to running forward()

Thank you!!!

Among other things (like hooks), __call__ also calls forward. When you call an object directly, you are triggering its magic method __call__.
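Here is a minimal sketch of the plain-Python mechanism (nothing PyTorch-specific; the Greeter class is just a made-up example):

import torch  # not needed here, shown only to mirror the thread's setup

class Greeter:
    def __init__(self, name):
        self.name = name

    def __call__(self, greeting):
        # g("hello") is syntactic sugar for g.__call__("hello")
        return f"{greeting}, {self.name}!"

g = Greeter("world")
print(g("hello"))           # hello, world!
print(g.__call__("hello"))  # identical result

nn.Module defines __call__ for you, which is why p1.fc1(x) ends up running forward (plus any hooks).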


Hi Bram,

Thanks for your reply! Could you elaborate a bit? I don’t see any hooks in the source code -> https://pytorch.org/docs/stable/_modules/torch/nn/modules/linear.html#Linear

When defining a model/layer (or even a loss function or activation) in PyTorch (Perceptron in your example), it should always subclass nn.Module, and the forward method is a special method in that case: it represents the forward pass of the input data. The reason why you should always do model(some_input) and not model.forward(some_input) is that the first one triggers registered hooks and the second one doesn't. There are three types of hooks: forward pre-hooks, forward hooks, and backward hooks. E.g., you can assign a backward hook to a layer to print its gradients during the backward call. You can read more about hooks (how to register them, …) in the docs. Even if you don't use them in your code, calling forward implicitly via __call__ is the preferred way to do it.
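For example, here is a small demo using register_forward_hook (part of the public nn.Module API); the hook fires for layer(x) but stays silent for layer.forward(x):

import torch
import torch.nn as nn

layer = nn.Linear(3, 1)

def print_shape(module, inputs, output):
    # runs after forward, but only when the module is invoked via __call__
    print(f"{module.__class__.__name__} output shape: {tuple(output.shape)}")

handle = layer.register_forward_hook(print_shape)

x = torch.randn(5, 3)
layer(x)          # triggers the hook and prints the shape
layer.forward(x)  # bypasses __call__, so the hook never runs

handle.remove()   # detach the hook when done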

Btw, the right source code to check is nn.Module, where __call__ is defined, not nn.Linear.
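Conceptually, __call__ does something like this (a heavily simplified sketch, not the actual implementation, which handles many more cases):

def __call__(self, *args, **kwargs):
    # run any registered forward pre-hooks first
    for hook in self._forward_pre_hooks.values():
        hook(self, args)
    # then the actual forward pass
    result = self.forward(*args, **kwargs)
    # then any registered forward hooks
    for hook in self._forward_hooks.values():
        hook(self, args, result)
    return result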


Ahh… now I see. Thank you!