When defining a model/layer (or even a loss function or activation) in PyTorch (the Perceptron in your example), it should always subclass nn.Module, and the forward method plays a special role there: it implements the forward pass over the input data. The reason you should always call model(some_input) and not model.forward(some_input) is that the first one triggers registered hooks and the second one doesn't. There are 3 types of hooks: forward pre-hooks, forward hooks, and backward hooks. E.g. you can register a backward hook on a layer to print its gradients during the backward call. You can read more about hooks (how to register them, etc.) in the docs. Even if you don't use them in your code, calling forward implicitly via __call__ is the preferred way to do it.
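To illustrate the difference, here's a minimal plain-Python sketch (not the real PyTorch source, just a simplified stand-in) of what nn.Module.__call__ does: it runs any registered pre-hooks and forward hooks around your forward method, so bypassing __call__ also bypasses the hooks.

```python
# Simplified illustration (NOT the actual PyTorch implementation) of why
# model(x) and model.forward(x) behave differently: __call__ wraps forward
# with the registered hooks.
class Module:
    def __init__(self):
        self._forward_pre_hooks = []
        self._forward_hooks = []

    def register_forward_pre_hook(self, hook):
        self._forward_pre_hooks.append(hook)

    def register_forward_hook(self, hook):
        self._forward_hooks.append(hook)

    def __call__(self, *args):
        for hook in self._forward_pre_hooks:
            hook(self, args)           # runs before forward
        result = self.forward(*args)
        for hook in self._forward_hooks:
            hook(self, args, result)   # runs after forward
        return result


class Perceptron(Module):
    def forward(self, x):
        return max(0.0, x)  # toy stand-in for the real computation


model = Perceptron()
seen_outputs = []
model.register_forward_hook(lambda mod, inp, out: seen_outputs.append(out))

model(-2.0)         # goes through __call__, so the hook fires
model.forward(3.0)  # bypasses __call__, so the hook does NOT fire
print(seen_outputs)  # only the first call was recorded
```

The real nn.Module.__call__ does more bookkeeping than this, but the core idea is the same: the hooks only fire when you go through __call__.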
Btw, this is the right source code to check.