I’m trying to refactor some code that uses nn.LSTM so that it uses nn.LSTMCell instead.
Here is the snippet:
    def forward(self, sentence):
        embeds = self.embeddings(sentence)
        # Run the embeddings through the LSTM (sequence length x batch of 1 x features).
        lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1))
        # Apply ReLU to the first linear layer's output.
        x = self.activation(self.linear1(lstm_out[-1].view(1, -1)))
        # Second linear layer, then log-softmax over the output.
        x = self.linear2(x)
        return F.log_softmax(x, dim=-1)
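As far as I understand, the LSTMCell version has to step through the sequence manually and keep the hidden and cell states itself. Below is my rough sketch, assuming self.lstm is replaced by self.lstm_cell = nn.LSTMCell(embedding_dim, hidden_dim) and the module stores self.hidden_dim (neither of these is in the snippet above), but I'm not sure it's correct:

    import torch
    import torch.nn.functional as F  # same imports the original module relies on

    def forward(self, sentence):
        embeds = self.embeddings(sentence)  # shape: (seq_len, embedding_dim)

        # LSTMCell processes one time step at a time, so the hidden and cell
        # states have to be initialised explicitly (batch size of 1 here).
        hx = torch.zeros(1, self.hidden_dim, device=embeds.device)
        cx = torch.zeros(1, self.hidden_dim, device=embeds.device)

        # Step through the sequence manually; after the loop, hx plays the
        # role of lstm_out[-1] from the nn.LSTM version.
        for embed in embeds:
            hx, cx = self.lstm_cell(embed.view(1, -1), (hx, cx))

        # Same head as before: ReLU on the first linear layer's output,
        # then the second linear layer and log-softmax.
        x = self.activation(self.linear1(hx))
        x = self.linear2(x)
        return F.log_softmax(x, dim=-1)
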
Can you assist?