Better solution than lstm(x)[0]

Hello!
With lstm(x)[0] we use only part of what the layer returns and throw the rest away, and I suspect this affects how many epochs the model needs to train.
Does a better solution exist?
I mean, we should do something so we don't lose valuable data from our LSTM.
Edit:
While doing some TF work I just realized that the LSTM cell there is not a ready-to-use layer and the states have to be handled manually.
Is anyone willing to share a ready-to-use LSTM layer for PyTorch? :sunny:
I am writing a function I can call to build an LSTM in TF.
My struggle is in TF. Maybe it will inspire someone. If TF code is against the rules of the forum, I will delete it.
Can I post TF code here?

Well, if you call lstm(x)[0] you are using the output of the lstm layer (the hidden state at every time step), not its final states.
nn.LSTM is a fully working layer, while nn.LSTMCell is only a single cell.
What are you missing? If you have some TF code you would like to port to PyTorch, feel free to post it, and we can have a look at it.
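
To make that concrete, here is a minimal sketch of what nn.LSTM returns (the sizes are made up for illustration):

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
x = torch.randn(5, 3, 10)            # (seq_len, batch, input_size)

output, (h_n, c_n) = lstm(x)         # states default to zeros if not passed
print(output.shape)                  # torch.Size([5, 3, 20]) - hidden state at every time step
print(h_n.shape)                     # torch.Size([2, 3, 20]) - final hidden state per layer
print(c_n.shape)                     # torch.Size([2, 3, 20]) - final cell state per layer

# lstm(x)[0] is just `output`; nothing is lost unless you also need h_n/c_n.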

From the official PyTorch tutorial on LSTMs:

import torch
import torch.nn as nn

lstm = nn.LSTM(3, 3)  # Input dim is 3, output dim is 3
inputs = [torch.randn(1, 3) for _ in range(5)]  # make a sequence of length 5

# initialize the hidden state.
hidden = (torch.randn(1, 1, 3),
          torch.randn(1, 1, 3))
for i in inputs:
    # Step through the sequence one element at a time.
    # after each step, hidden contains the hidden state.
    out, hidden = lstm(i.view(1, 1, -1), hidden)
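
nn.LSTM can also consume the whole sequence in a single call (the tutorial continues with exactly this pattern), which avoids the Python loop:

# Alternatively, process the entire sequence at once.
inputs = torch.cat(inputs).view(len(inputs), 1, -1)    # (seq_len, batch, input_size)
hidden = (torch.randn(1, 1, 3), torch.randn(1, 1, 3))  # clean hidden state
out, hidden = lstm(inputs, hidden)
# `out` holds the hidden state for every step; `hidden` holds the final (h, c).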

And I read somewhere on the PyTorch Forum: "Don't use lstm in Sequential".
The question is: is it a fully functional layer, or do I have to use a for loop and feed the state back into the model to get the result?
In TF there is no ready LSTM layer to drop into a model, and I assumed the situation was the same in PyTorch.
Just slicing the data from lstm(input) made me pay attention to this.
But I don't know the right answer.
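
For context: nn.LSTM is a fully functional layer; the advice against nn.Sequential exists only because nn.LSTM returns a tuple, which nn.Sequential cannot pass on to the next layer. A minimal sketch of the usual workaround, wrapping it in a custom nn.Module (the sizes are made up for illustration):

import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, input_size=8, hidden_size=16, num_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # x: (seq_len, batch, input_size)
        output, (h_n, c_n) = self.lstm(x)   # unpack the tuple ourselves
        return self.fc(output[-1])          # classify from the last time step

model = LSTMClassifier()
logits = model(torch.randn(5, 3, 8))        # -> shape (3, 4)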
I finished my LSTM implementation in TF, but I don't know how to append a row to a tensor in TF. I get shape [?, 87], which does not feed forward because of the ?. The problem is probably simple, but I don't know the answer. The TF code works and produces results, but you have to change batch_size manually.
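
On appending a row to a tensor in TF: the usual pattern is tf.concat with the row expanded to rank 2; a minimal sketch (the 87-column shape is assumed from above):

import tensorflow as tf

rows = tf.zeros([0, 87])                                      # accumulator with zero rows
new_row = tf.ones([87])
rows = tf.concat([rows, tf.expand_dims(new_row, 0)], axis=0)  # shape becomes [1, 87]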

I guess similar code would be needed in PyTorch to run a fully functional LSTM.

def RNN(x):
    # x is expected to have shape [timesteps, batch_size, features].
    lstm_cell = tf.nn.rnn_cell.LSTMCell(3, activation="tanh")
    # Derive the batch size from the input instead of hard-coding 27.
    state = lstm_cell.zero_state(batch_size=tf.shape(x)[1], dtype=tf.float32)
    output = None
    for number in range(timesteps):
        # Feed the previous state back in at every step; no tf.Variable is
        # needed, the cell returns a fresh output tensor each iteration.
        output, state = lstm_cell(x[number], state)
        print('State: ', state)
    return output
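
To avoid the hard-coded batch_size entirely, TF 1.x ships this loop as tf.nn.dynamic_rnn, which infers the batch size at runtime; a minimal sketch under the same [timesteps, batch, features] layout:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, None, 87])    # [timesteps, batch, features]
lstm_cell = tf.nn.rnn_cell.LSTMCell(3, activation="tanh")
# time_major=True matches the [timesteps, batch, features] layout above.
outputs, state = tf.nn.dynamic_rnn(lstm_cell, x, dtype=tf.float32, time_major=True)
last_output = outputs[-1]                           # output of the final time step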