Forward method in PyTorch

I am following this tutorial and I am confused by the part where the forward method is defined. Specifically, from the tutorial we have this class:

class Mnist_Logistic(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(784, 10)

    def forward(self, xb):
        return self.lin(xb)

Then, when we do

model = Mnist_Logistic()
print(loss_func(model(xb), yb))

the call model(xb) invokes the forward method automatically. Why does that work, instead of having to write something like print(loss_func(model.forward(xb), yb))?

Thank you!

The recommended way is to call the model directly, which will execute the __call__ method of nn.Module, as seen in this line of code.
__call__ makes sure that all registered hooks are run properly and calls forward in between.
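
You can see the difference with a forward hook: calling the model runs it, while calling model.forward directly bypasses __call__ and skips it. A minimal sketch:

import torch
import torch.nn as nn

model = nn.Linear(784, 10)

def hook(module, inp, out):
    print("forward hook fired, output shape:", out.shape)

model.register_forward_hook(hook)

xb = torch.randn(2, 784)
model(xb)          # goes through __call__, so the hook fires
model.forward(xb)  # bypasses __call__, so the hook is silently skipped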


My PyTorch model isn’t automatically calling the forward method.

I’m trying to embed my graph adjacency matrix by aggregating neighbours and combining them (similar to GraphSAGE).

The adjacency matrix is of size n×n and the embedding will be of size n×d, where d < n.

So, basically, in my code the adjacency matrix of a graph is fed as input, and the forward method should return the embedding.
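
In other words, each layer should do an aggregate step over the neighbours followed by a combine step. A minimal sketch of the intended update (sage_layer and its argument names are just for illustration, not my real code):

import torch
import torch.nn.functional as F

def sage_layer(A, h, W):
    # A: (n, n) adjacency, h: (n, d_in) current embeddings, W: (d_in, d_out)
    h_neigh = torch.mm(A, h)             # aggregate: sum over each node's neighbours
    return F.relu(torch.mm(h_neigh, W))  # combine: linear map plus nonlinearity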


import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np

class Setting(nn.Module):

    def __init__(self, A):
        super(Setting, self).__init__()

        self.A = torch.as_tensor(A, dtype=torch.float32)  # (n, n) adjacency
        self.X = self.A.sum(dim=1)                        # node degrees, (n,)
        self.feature_len = 10

        self.L = 3
        self.n = 10
        self.z_dim = 8

        # W0 must be an nn.Parameter (not a plain tensor), otherwise it is
        # invisible to net.parameters() and can never be trained
        self.W0 = nn.Parameter(
            nn.init.xavier_uniform_(torch.empty(self.feature_len, self.z_dim)))

        # the GRUCell has to be created once here; building it inside forward
        # would re-initialize its weights on every call
        self.rnn = nn.GRUCell(self.z_dim, self.z_dim)

    def forward(self, A):
        # A mirrors the original call signature; self.A is used below

        # degree-scaled adjacency: row v of A scaled by the degree of node v
        d_u = self.X.unsqueeze(1) * self.A               # (n, n)

        # layer-0 embedding: row v of W0 scaled by the degree of node v
        # (this broadcast only works because feature_len == n here)
        h = F.relu(self.X.unsqueeze(1) * self.W0)        # (n, z_dim)
        h = F.normalize(h, p=2, dim=1)

        H = h
        for l in range(1, self.L):
            h_n = torch.mm(d_u, h)   # aggregate neighbour embeddings
            # combine with the previous embedding; no .detach() here,
            # since detaching would cut the gradient path back to W0
            h = self.rnn(h, h_n)
            h = F.normalize(h, p=2, dim=1)
            H = torch.max(H, h)

        return H

Now, based on the value of H, I want to train the weight W0 by taking the MSELoss of H and H1 (another, already known embedding).

net = Setting(A)
loss_fn = nn.MSELoss()
# parameters() must be called on the instance, not on the Setting class
optimizer = torch.optim.Adam(net.parameters(), lr=0.1)

for epoch in range(10):
    optimizer.zero_grad()        # clear gradients from the previous step
    H = net(A)                   # calls forward via __call__
    loss_calc = loss_fn(H, H1)
    loss_calc.backward()         # backpropagate into W0 and the GRUCell
    optimizer.step()             # the actual weight update was missing

print(H)

How can I train the weight W0 based on this architecture? Any modifications to the code are highly appreciated.

Thanks in advance.

Double post from here. Let’s continue the discussion in the created topic, please.
