TypeError: relu_(): argument 'input' (position 1) must be Tensor, not int

Dear ,

Creating an NN with ReLU is giving an error, but it works fine with a linear layer.

class LSTM(nn.Module):
    def __init__(self, input_size=1, hidden_layer_size=20, output_size=1):
        super().__init__()

        self.hidden_layer_size = hidden_layer_size
        self.lstm = nn.LSTM(input_size, hidden_layer_size)
        self.relu = nn.functional.relu(hidden_layer_size, output_size)  # this call raises the TypeError
        self.hidden_cell = (torch.zeros(1, 1, self.hidden_layer_size),
                            torch.zeros(1, 1, self.hidden_layer_size))

    def forward(self, input_seq):
        lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
        predictions = self.linear(lstm_out.view(len(input_seq), -1))
        return predictions[-1]

model = LSTM()
loss_function = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr = 0.0001)

After this I get the following error:

TypeError                                 Traceback (most recent call last)
in <module>
----> 1 model = LSTM()
      2 loss_function = nn.MSELoss()
      3 optimizer = torch.optim.Adam(model.parameters(), lr = 0.0001)

in __init__(self, input_size, hidden_layer_size, output_size)
      5 self.hidden_layer_size = hidden_layer_size
      6 self.lstm = nn.LSTM(input_size, hidden_layer_size)
----> 7 self.relu = nn.functional.relu(hidden_layer_size, output_size) # activation unit # linear # try with relu ###
      8
      9 self.hidden_cell = (torch.zeros(1, 1, self.hidden_layer_size), # Basic units of LSTM networks are LSTM layers that have multiple LSTM cells. Cells do have an internal cell state, often abbreviated as "c", and the cell's output is what is called a "hidden state", abbreviated as "h". Regular RNNs have just the hidden state and no cell state. It turns out that RNNs have difficulty accessing information from a long time ago.

/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py in relu(input, inplace)
   1059         return handle_torch_function(relu, (input,), input, inplace=inplace)
   1060     if inplace:
-> 1061         result = torch.relu_(input)
   1062     else:
   1063         result = torch.relu(input)

TypeError: relu_(): argument 'input' (position 1) must be Tensor, not int

The issue seems to be related to your other post.
If you want to create an nn.ReLU module, use:

self.relu = nn.ReLU()

Currently you are trying to feed integers to the functional API, which expects a tensor and the optional inplace argument.
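
For completeness, a minimal corrected sketch. It assumes the self.linear layer that forward references is meant to be nn.Linear(hidden_layer_size, output_size) (the posted __init__ never defines it), and that the activation should sit between the LSTM output and the linear projection:

    import torch
    import torch.nn as nn

    class LSTM(nn.Module):
        def __init__(self, input_size=1, hidden_layer_size=20, output_size=1):
            super().__init__()
            self.hidden_layer_size = hidden_layer_size
            self.lstm = nn.LSTM(input_size, hidden_layer_size)
            # assumed projection layer, since forward() references self.linear
            self.linear = nn.Linear(hidden_layer_size, output_size)
            # nn.ReLU takes no size arguments; it is applied to tensors in forward
            self.relu = nn.ReLU()
            self.hidden_cell = (torch.zeros(1, 1, hidden_layer_size),
                                torch.zeros(1, 1, hidden_layer_size))

        def forward(self, input_seq):
            lstm_out, self.hidden_cell = self.lstm(
                input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
            activated = self.relu(lstm_out.view(len(input_seq), -1))  # ReLU on a tensor
            predictions = self.linear(activated)
            return predictions[-1]

    model = LSTM()
    loss_function = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)

The equivalent functional form would be nn.functional.relu(...) applied to the same tensor inside forward. Either way, ReLU operates on a tensor produced during the forward pass, not on integer layer sizes in __init__.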