Trouble with Layer Architecture

Hi all,

I am a noob with PyTorch, but I have been building neural network models in JMP for a while now.

I am trying to reproduce the structure of my JMP models in PyTorch: 10 input parameters, a single
tanh hidden layer with 10 nodes, feeding a single continuous prediction value.

The final equation ends up being:

predicted_value = b
    + weight_n1 * tanh(0.5 * (weight_p1*parameter1 + weight_p2*parameter2 + …))
    + weight_n2 * tanh(0.5 * (weight_p11*parameter1 + weight_p12*parameter2 + …))
    + …

…and so on for 10 total nodes. I have tried to use:

import torch

class Net(torch.nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        # hidden layer with 10 nodes, matching the JMP structure
        self.input_layer = torch.nn.Linear(self.in_features, 10)
        # output layer producing out_features continuous predictions
        self.hidden_layer = torch.nn.Linear(10, self.out_features)

    def forward(self, x):
        out = self.input_layer(x)
        # tanh(0.5 * x) activation, as in the JMP equation
        out = self.hidden_layer(torch.tanh(0.5 * out))
        return out

How do I set up the forward function for this? Any time I add a nonlinear activation function, I still get a single predicted value.

Thanks in advance!

The nonlinearity should not change the number of outputs. In your current code snippet, the number of output values is determined by self.out_features.
Let me know if I misunderstood the issue.
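
For what it's worth, here is a minimal sketch of the shape behavior, assuming the snippet above is wrapped in a class named Net as written (the batch size of 32 is just an arbitrary example):

import torch

# 10 inputs -> 10-node tanh layer -> 1 continuous prediction, as in the JMP model
model = Net(in_features=10, out_features=1)
x = torch.randn(32, 10)   # batch of 32 samples, 10 parameters each
print(model(x).shape)     # torch.Size([32, 1]): one prediction per sample

# widening out_features changes the output count; the tanh activation does not
model3 = Net(in_features=10, out_features=3)
print(model3(x).shape)    # torch.Size([32, 3])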