How do I properly implement this architecture?

At the moment my model gives me the following error:
TypeError: tanh(): argument 'input' (position 1) must be Tensor, not tuple

Is there a solution? How do I implement this model in PyTorch?

The model is as follows:

import torch
import torch.nn as nn
import torch.nn.functional as F

class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()
        self.lstm1 = nn.LSTM(input_size=87, hidden_size=256)
        self.lstm2 = nn.LSTM(input_size=256, hidden_size=128)
        self.lstm3 = nn.LSTM(input_size=128, hidden_size=64)
        self.lstm4 = nn.LSTM(input_size=64, hidden_size=32)
        self.fc1 = nn.Linear(in_features=32, out_features=128)
        self.fc2 = nn.Linear(in_features=128, out_features=64)
        self.fc3 = nn.Linear(in_features=64, out_features=32)
        self.fc4 = nn.Linear(in_features=32, out_features=3)

    def forward(self, x):
        x = torch.tanh(self.lstm1(x))
        x = torch.tanh(self.lstm2(x))
        x = torch.tanh(self.lstm3(x))
        x = torch.tanh(self.lstm4(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = self.fc4(x)
        return x

Good morning @andreiliphd,
I believe nn.LSTM outputs something like this: output, (h_n, c_n).
Maybe you only want to apply your tanh to the "output"? So you should do:

torch.tanh(self.lstm1(x)[0])
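For reference, here is a sketch of the full model with the [0] indexing applied to every LSTM call (same layer sizes as in your post; the dummy input shape (seq_len, batch, 87) is my assumption, since nn.LSTM defaults to batch_first=False):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()
        self.lstm1 = nn.LSTM(input_size=87, hidden_size=256)
        self.lstm2 = nn.LSTM(input_size=256, hidden_size=128)
        self.lstm3 = nn.LSTM(input_size=128, hidden_size=64)
        self.lstm4 = nn.LSTM(input_size=64, hidden_size=32)
        self.fc1 = nn.Linear(in_features=32, out_features=128)
        self.fc2 = nn.Linear(in_features=128, out_features=64)
        self.fc3 = nn.Linear(in_features=64, out_features=32)
        self.fc4 = nn.Linear(in_features=32, out_features=3)

    def forward(self, x):
        # nn.LSTM returns a tuple (output, (h_n, c_n));
        # [0] keeps only the output tensor, so tanh gets a Tensor, not a tuple
        x = torch.tanh(self.lstm1(x)[0])
        x = torch.tanh(self.lstm2(x)[0])
        x = torch.tanh(self.lstm3(x)[0])
        x = torch.tanh(self.lstm4(x)[0])
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = self.fc4(x)
        return x

# Sanity check with a dummy batch: (seq_len=5, batch=2, features=87)
model = RNN()
out = model(torch.randn(5, 2, 87))
print(out.shape)  # torch.Size([5, 2, 3])
```

Note that the LSTM output already passes through a tanh internally, so the extra torch.tanh is optional, but the [0] indexing is what fixes your TypeError.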

Good morning lelouedec!

Thank you very much for your help!