How to train a model if the forward function has many arguments?

I have a very basic model and I want to print the loss and run the backward pass. How can I call the forward function if it takes more than one argument?

Thank you

import torch
import torch.nn as nn

class Net(nn.Module):

    def __init__(self, inp, hid, out):
        super(Net, self).__init__()
        self.layers = nn.ModuleDict(
            {'lin1': nn.Linear(inp, hid),
             'lin2': nn.Linear(hid, out)})

        self.activations = nn.ModuleDict(
            [['act1', nn.Sigmoid()],
             ['act2', nn.Sigmoid()]])

    def forward(self, x, lay, activs):
        x = self.layers[lay](x)
        x = self.activations[activs](x)
        return x

x = torch.randn(5, 4)
y = torch.randn(5, 3)

model = Net(4, 3, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.0001)
loss_fn = nn.MSELoss(reduction='sum')

for t in range(500):
    y_pred = model(x)  # this is where I'm stuck: forward() expects more arguments than just x
    loss = loss_fn(y_pred, y)
    print(t, loss.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Just do model(input1, input2, …). Calling model(...) takes as many arguments as the forward function defines.
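
For the model in the first post, a call would look like this sketch, where lay and activs are placeholders for whatever values you want forward to receive for those parameters:

    # pass every argument that forward() expects after self;
    # lay and activs here are placeholders, not defined variables
    y_pred = model(x, lay, activs)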

I tried that, but since I declared the layers and activations in the class, what should I use as arguments?

The way you coded it is a bit unusual; people usually apply the layers in a fixed order. If you want to keep this structure, you should write something inside the forward function that loops through the layers and activations in the order you want. Thinking of the typical structure of a neural network, where each layer is followed by an activation, you can iterate over zip(layer_keys, activation_keys) to go through them in order: layer1 then act1, layer2 then act2 (see the sketch after this post).

As you can imagine, the lay and activs inputs are then no longer necessary, unless you want to pick specific layers depending on some condition.
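
Here is a minimal sketch of that idea, reusing the keys from your post. nn.ModuleDict preserves insertion order, so iterating over it yields the keys in the order they were added:

    import torch
    import torch.nn as nn

    class Net(nn.Module):

        def __init__(self, inp, hid, out):
            super(Net, self).__init__()
            self.layers = nn.ModuleDict(
                {'lin1': nn.Linear(inp, hid),
                 'lin2': nn.Linear(hid, out)})

            self.activations = nn.ModuleDict(
                [['act1', nn.Sigmoid()],
                 ['act2', nn.Sigmoid()]])

        def forward(self, x):
            # walk the layer/activation keys in insertion order:
            # lin1 -> act1, then lin2 -> act2
            for lay, act in zip(self.layers, self.activations):
                x = self.layers[lay](x)
                x = self.activations[act](x)
            return x

With this forward, the training loop from the first post works unchanged, since model(x) only needs the input tensor.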