Activation function: "TypeError: 'Tensor' object is not callable"

I am trying to use the ReLU activation function in a PyTorch LSTM but am getting the error "TypeError: 'Tensor' object is not callable". Any guidance or help? Can I use a different activation function in forward propagation? Right now I am using the same activation function in the hidden layers and in forward propagation. Your kind review would be very helpful.

```python
import torch
import torch.nn as nn

class LSTM(nn.Module):
    def __init__(self, input_size=1, hidden_layer_size=20, output_size=1):
        super().__init__()

        self.hidden_layer_size = hidden_layer_size
        self.lstm = nn.LSTM(input_size, hidden_layer_size)
        self.relu = nn.functional.relu(torch.FloatTensor(hidden_layer_size), torch.FloatTensor(output_size))
        self.hidden_cell = (torch.zeros(1, 1, self.hidden_layer_size),
                            torch.zeros(1, 1, self.hidden_layer_size))

    def forward(self, input_seq):
        lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
        predictions = self.relu(lstm_out.view(len(input_seq), -1))
        return predictions[-1]
```



```python
model = LSTM()
loss_function = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)

epochs = 150

for i in range(epochs):
    for seq, labels in train_inout_seq:
        optimizer.zero_grad()
        model.hidden_cell = (torch.zeros(1, 1, model.hidden_layer_size),
                             torch.zeros(1, 1, model.hidden_layer_size))
        y_pred = model(seq)

        single_loss = loss_function(y_pred, labels)
        single_loss.backward()
        optimizer.step()

    if i % 25 == 1:
        print(f'epoch: {i:3} loss: {single_loss.item():10.8f}')

print(f'epoch: {i:3} loss: {single_loss.item():10.10f}')
```

After that I am getting this error:


```
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>
      6 model.hidden_cell = (torch.zeros(1, 1, model.hidden_layer_size),
      7                      torch.zeros(1, 1, model.hidden_layer_size))
----> 8 y_pred = model(seq)
      9
     10 single_loss = loss_function(y_pred, labels)

/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    548             result = self._slow_forward(*input, **kwargs)
    549         else:
--> 550             result = self.forward(*input, **kwargs)
    551         for hook in self._forward_hooks.values():
    552             hook_result = hook(self, input, result)

<ipython-input> in forward(self, input_seq)
     12     def forward(self, input_seq):
     13         lstm_out, self.hidden_cell = self.lstm(input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
---> 14         predictions = self.relu(lstm_out.view(len(input_seq), -1))
     15         return predictions[-1]

TypeError: 'Tensor' object is not callable
```

You are trying to define `self.relu` as a tensor in:

```python
self.relu = nn.functional.relu(torch.FloatTensor(hidden_layer_size), torch.FloatTensor(output_size))
```

Note that this call is most likely not what you are trying to achieve, as the second argument would be used as the `inplace` argument.
Also, `torch.FloatTensor` will create a new tensor of the provided shape with uninitialized memory, so I would recommend using factory methods such as `torch.zeros`, `torch.randn`, etc.
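To make the difference concrete, here is a small standalone snippet (my illustration, not from the original code):

```python
import torch

x = torch.FloatTensor(3)  # shape (3,), uninitialized memory: values are whatever was there
z = torch.zeros(3)        # shape (3,), explicitly filled with zeros
r = torch.randn(3)        # shape (3,), sampled from a standard normal distribution
print(x)
print(z)
print(r)
```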

The original error is raised later, since you are trying to call the tensor in:

```python
self.relu(lstm_out.view(len(input_seq), -1))
```

Assuming you would like to use `self.relu` as a module, use `self.relu = nn.ReLU()`.
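For example, this standalone snippet shows that the module form is callable on tensors (a toy example of mine, not the poster's code):

```python
import torch
import torch.nn as nn

relu = nn.ReLU()            # a module, not a tensor
out = relu(torch.randn(5))  # callable: negative entries are clamped to zero
print(out)
```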

I used `self.relu = nn.ReLU()`, but how do I give the input size and hidden layer size to that ReLU function?

For a linear layer we pass the hidden layer size and the output size, so how do we give the hidden layer size and output size to the ReLU function?

```python
self.linear = nn.Linear(hidden_layer_size, output_size)
```

I tried `self.relu = nn.ReLU()` as you suggested, but I cannot see my loss changing; it stays the same on every epoch:

```
epoch:   1 loss: 0.88755035
epoch:  26 loss: 0.88755035
epoch:  51 loss: 0.88755035
epoch:  76 loss: 0.88755035
epoch: 101 loss: 0.88755035
epoch: 126 loss: 0.88755035
epoch: 149 loss: 0.8875503540
```

The `nn.ReLU()` module doesn't have any parameters and will work on any input shape, so you don't need to provide a shape for it.
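ReLU is typically applied between layers that do have shapes, e.g. between the LSTM output and a final `nn.Linear`, which is where `hidden_layer_size` and `output_size` belong. Here is a sketch of how the posted model could be wired up this way (my suggestion, not guaranteed to fix the constant loss on its own; note that if ReLU alone produces the final output and the activations are all negative, the outputs and gradients will be zero, which could explain a loss that never changes):

```python
import torch
import torch.nn as nn

class LSTM(nn.Module):
    def __init__(self, input_size=1, hidden_layer_size=20, output_size=1):
        super().__init__()
        self.hidden_layer_size = hidden_layer_size
        self.lstm = nn.LSTM(input_size, hidden_layer_size)
        self.relu = nn.ReLU()                                    # parameter-free, works on any shape
        self.linear = nn.Linear(hidden_layer_size, output_size)  # maps hidden features to the output
        self.hidden_cell = (torch.zeros(1, 1, hidden_layer_size),
                            torch.zeros(1, 1, hidden_layer_size))

    def forward(self, input_seq):
        lstm_out, self.hidden_cell = self.lstm(
            input_seq.view(len(input_seq), 1, -1), self.hidden_cell)
        activated = self.relu(lstm_out.view(len(input_seq), -1))  # non-linearity on LSTM features
        predictions = self.linear(activated)                      # shapes live in the Linear layer
        return predictions[-1]
```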