Hi,
I have a simple question. For example, I usually see the pytorch code like this:
import torch

class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        In the constructor we instantiate two nn.Linear modules and assign them as
        member variables.
        """
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        In the forward function we accept a Variable of input data and we must return
        a Variable of output data. We can use Modules defined in the constructor as
        well as arbitrary operators on Variables.
        """
        h_relu = self.linear1(x).clamp(min=0)
        y_pred = self.linear2(h_relu)
        return y_pred

D_in, H, D_out = 1000, 100, 10  # example dimensions
x = torch.randn(64, D_in)       # example input batch

model = TwoLayerNet(D_in, H, D_out)
y_pred = model(x)
Here, why don't we write model.forward(x) to predict y?
One of the ideas behind torch.nn.Module is that you do the expressive work by defining forward, while the Module class that you inherit from provides the "paperwork" (e.g. calling hooks) in the model.__call__(...) method (which is what model(x) invokes, per Python's special-method conventions).
If you are curious, you can look at what model(x) does behind the scenes beyond calling model.forward(x) here: https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/module.py#L205
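To make the idea concrete, here is a minimal sketch of the dispatch pattern in plain Python. This is not PyTorch's actual Module implementation (the Module and Doubler classes below are simplified stand-ins); it only shows why calling model(x) is different from calling model.forward(x) directly: __call__ runs the bookkeeping (here, forward hooks) around the user-defined forward.

    # Minimal sketch (assumption: heavily simplified, not PyTorch's real code)
    # of how __call__ wraps forward with extra "paperwork" such as hooks.
    class Module:
        def __init__(self):
            self._forward_hooks = []

        def register_forward_hook(self, hook):
            self._forward_hooks.append(hook)

        def __call__(self, *args):
            # model(x) lands here: run forward, then run any registered hooks.
            result = self.forward(*args)
            for hook in self._forward_hooks:
                hook(self, args, result)
            return result

        def forward(self, *args):
            raise NotImplementedError

    class Doubler(Module):  # hypothetical example module
        def forward(self, x):
            return 2 * x

    model = Doubler()
    calls = []
    model.register_forward_hook(lambda mod, inp, out: calls.append(out))

    model(3)          # goes through __call__, so the hook fires
    model.forward(3)  # bypasses __call__, so the hook does NOT fire
    # calls is now [6]: only the model(3) call triggered the hook

So model.forward(x) would usually return the same output, but it would silently skip hooks and any other housekeeping that __call__ performs, which is why the idiom is always model(x).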
This looks like an idiom that should be explicitly mentioned in the docs. It took me some time to figure out, and I then had to search for this answer to confirm what I had only guessed.