LSTM from Keras to PyTorch


I use Keras and I'm trying to improve my PyTorch skills.
I'm trying to convert this Keras LSTM into a PyTorch one:

from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Embedding
from keras.layers import LSTM
from keras.datasets import imdb

print('loading data')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words = 20000)



x_train = sequence.pad_sequences(x_train, maxlen = 80)
x_test = sequence.pad_sequences(x_test, maxlen = 80)

model = Sequential()
model.add(Embedding(20000, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

model.fit(x_train, y_train,
          batch_size=32,
          epochs=15,
          verbose=2,
          validation_data=(x_test, y_test))

score, acc = model.evaluate(x_test, y_test,
                            batch_size=32,
                            verbose=2)

print('Test score:', score)
print('Test accuracy:', acc)

I have an issue with sequence.pad_sequences (I've tried torch.nn.utils.rnn.pack_padded_sequence).
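From what I understand, pack_padded_sequence is for masking variable-length sequences inside the RNN, while Keras's pad_sequences just pads/truncates everything to a fixed length, which can be done by hand. A minimal sketch of that fixed-length padding (the helper name is my own):

```python
import torch

def pad_to_fixed_length(sequences, maxlen=80, pad_value=0):
    """Pad (or truncate) lists of token ids to a fixed length, mimicking
    keras.preprocessing.sequence.pad_sequences with its default
    'pre' padding and 'pre' truncation."""
    out = torch.full((len(sequences), maxlen), pad_value, dtype=torch.long)
    for i, seq in enumerate(sequences):
        trunc = seq[-maxlen:]  # keep the last maxlen tokens ('pre' truncation)
        out[i, maxlen - len(trunc):] = torch.tensor(trunc, dtype=torch.long)
    return out
```

For example, `pad_to_fixed_length([[1, 2, 3]], maxlen=5)` gives `[[0, 0, 1, 2, 3]]`, matching the Keras defaults.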

Should I use torch.nn.Dropout(p=0.5, inplace=False), or pass the dropout directly in the LSTM parameters?
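From what I've read, the dropout= argument of nn.LSTM is only applied between stacked layers (so it does nothing with num_layers=1), and there is no built-in equivalent of Keras's recurrent_dropout; a separate nn.Dropout on the output seems to be the usual substitute. A sketch of what I mean:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=128, hidden_size=128,
               num_layers=2, dropout=0.2,  # applied between the two stacked layers only
               batch_first=True)
drop = nn.Dropout(p=0.2)                   # extra dropout on the final output

x = torch.randn(32, 80, 128)               # (batch, seq_len, features)
out, (h, c) = lstm(x)
out = drop(out[:, -1, :])                  # last time step, like Keras's default return
print(out.shape)                           # torch.Size([32, 128])
```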

I also tried adding self.embedding = nn.Embedding(20000, 128) with x = self.embedding… in the def forward, but I am not sure whether I did it correctly.
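For context, here is roughly the model I have in mind (SentimentLSTM and the exact forward are my own guess at the translation, not a verified port):

```python
import torch
import torch.nn as nn

class SentimentLSTM(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.dropout = nn.Dropout(0.2)
        self.fc = nn.Linear(hidden_dim, 1)

    def forward(self, x):                     # x: (batch, seq_len) of token ids
        x = self.embedding(x)                 # -> (batch, seq_len, embed_dim)
        out, _ = self.lstm(x)                 # -> (batch, seq_len, hidden_dim)
        out = self.dropout(out[:, -1, :])     # last time step only
        return torch.sigmoid(self.fc(out)).squeeze(1)  # (batch,) of probabilities

model = SentimentLSTM()
dummy = torch.randint(0, 20000, (4, 80))      # batch of 4 padded sequences
probs = model(dummy)
print(probs.shape)                            # torch.Size([4])
```

I assume this would be trained with nn.BCELoss (or the sigmoid dropped and nn.BCEWithLogitsLoss used instead), matching the Keras binary_crossentropy setup.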

Could you please help me?