How to convert TensorFlow code into PyTorch? Please tell me how. Thanks.

I am a student studying PyTorch.

I ran into a problem converting a model from TensorFlow to PyTorch.

The input signal is an ECG signal.

The input shape I made is [batch_size=128, time_steps=288, input_size=1].

The TensorFlow code is:

import tensorflow as tf
from tensorflow.contrib import rnn

def BiRNN(x, _weights, _biases, _keep_prob, _n_hidden):
    # Unstack [batch, time, features] into a list of n_steps tensors
    # (n_steps is defined globally; here it is 288)
    x = tf.unstack(x, n_steps, 1)
    # Three stacked LSTM cells, each with dropout applied to its output
    lstm_cell1 = rnn.BasicLSTMCell(_n_hidden, forget_bias=1.0)
    lstm_cell1 = rnn.DropoutWrapper(lstm_cell1, output_keep_prob=_keep_prob)
    lstm_cell2 = rnn.BasicLSTMCell(_n_hidden, forget_bias=1.0)
    lstm_cell2 = rnn.DropoutWrapper(lstm_cell2, output_keep_prob=_keep_prob)
    lstm_cell3 = rnn.BasicLSTMCell(_n_hidden, forget_bias=1.0)
    lstm_cell3 = rnn.DropoutWrapper(lstm_cell3, output_keep_prob=_keep_prob)
    lstm_cells = rnn.MultiRNNCell([lstm_cell1, lstm_cell2, lstm_cell3])
    outputs, _ = rnn.static_rnn(lstm_cells, x, dtype=tf.float32)
    # Average the per-step outputs over time, then project to the classes
    output = tf.reduce_mean(outputs, axis=0)
    fc_pred = tf.matmul(output, _weights['out']) + _biases['out']
    return fc_pred

# Softmax cross-entropy over the class logits, optimized with Adam
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost, global_step=global_step)

The input size is [batch_size=128, sequence=288, input_size=1],
hidden_size is 250,
num_layers is 3,
and the output size is 18.

The PyTorch code is:

import torch
import torch.nn as nn
import torch.optim as optim

class BiRNN(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim, num_layers, batch_size, device):
        super(BiRNN, self).__init__()
        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.output_dim = output_dim
        self.num_layers = num_layers
        self.batch_size = batch_size
        self.device = device

        # Stacked LSTM over a [batch, time, features] input
        self.lstm = nn.LSTM(batch_first=True, input_size=self.input_dim, hidden_size=hidden_dim,
                            dropout=1.0, num_layers=self.num_layers)
        self.linear_1 = nn.Linear(self.hidden_dim, self.output_dim)

    def init_hidden(self):
        # Zero initial hidden and cell states: [num_layers, batch, hidden]
        hidden_state = torch.zeros(self.num_layers, self.batch_size, self.hidden_dim)
        cell_state = torch.zeros(self.num_layers, self.batch_size, self.hidden_dim)
        return hidden_state.to(self.device), cell_state.to(self.device)

    def forward(self, x):
        hidden_state, cell_state = self.init_hidden()
        lstm_out, _ = self.lstm(x, (hidden_state, cell_state))
        # Mean over the time dimension, matching tf.reduce_mean(outputs, axis=0)
        lstm_out = torch.mean(lstm_out, dim=1)
        y_pred = self.linear_1(lstm_out)
        return y_pred

loss_function = nn.CrossEntropyLoss()  # loss function
optimizer = optim.Adam(model.parameters(), lr=0.001)  # optimizer
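
For completeness, here is a minimal sketch of how I wire everything together for one training step; the x_batch/y_batch tensors below are dummy stand-ins for my real ECG batches:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = BiRNN(input_dim=1, hidden_dim=250, output_dim=18,
              num_layers=3, batch_size=128, device=device).to(device)

loss_function = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Dummy batch standing in for the real ECG data: [batch, time, features]
x_batch = torch.randn(128, 288, 1, device=device)
y_batch = torch.randint(0, 18, (128,), device=device)  # 18 class labels

model.train()
optimizer.zero_grad()
y_pred = model(x_batch)                # [128, 18] logits
loss = loss_function(y_pred, y_batch)  # CrossEntropyLoss applies log-softmax internally
loss.backward()
optimizer.step()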

The code I made runs, but the output values seem to be wrong.
How do I convert this TensorFlow model into PyTorch correctly?

Any help would be greatly appreciated.

Which values seem to be wrong?
Do you get an error, or where are you stuck at the moment?

Thanks for the reply.

The values from the model all come out the same, like below.

y_pred= tensor([[ 0.0298, -0.0148, -0.0226,  ...,  0.0066,  0.0391, -0.0161],
        [ 0.0298, -0.0148, -0.0226,  ...,  0.0066,  0.0391, -0.0161],
        [ 0.0298, -0.0148, -0.0226,  ...,  0.0066,  0.0391, -0.0161],
        ...,
        [ 0.0298, -0.0148, -0.0226,  ...,  0.0066,  0.0391, -0.0161],
        [ 0.0298, -0.0148, -0.0226,  ...,  0.0066,  0.0391, -0.0161],
        [ 0.0298, -0.0148, -0.0226,  ...,  0.0066,  0.0391, -0.0161]],
       device='cuda:0', grad_fn=<AddmmBackward>)
_, output_index = torch.max(y_pred, 1)
output_index = tensor([16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
        16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
        16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
        16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
        16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
        16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
        16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16,
        16, 16], device='cuda:0')
y_true = tensor([ 4, 11, 13, 13, 16,  2,  7, 13, 14,  2,  5,  5,  0,  1,  6, 10,  3,  7,
         1, 14, 12,  9,  6, 13, 16, 15,  1, 11, 13, 12,  6, 15,  3,  2, 13,  8,
         4,  4, 12, 11, 16,  0, 15,  1, 10,  6, 17, 13,  9,  3,  8, 14,  3,  0,
        11,  3,  3,  5, 14,  5,  5, 11, 10,  0,  0, 15,  5, 16,  2, 16,  3, 13,
         1, 15, 13,  0,  3,  3,  6, 17, 16,  1, 15,  0,  5,  0, 15, 13,  1,  4,
         7,  7,  0,  1, 16, 12,  4, 12, 12,  3, 17,  5, 10, 13, 17,  0,  6, 11,
        12,  7, 16, 15,  1,  0,  5,  5, 13,  0,  5, 16,  4,  0,  5,  6, 16, 12,
         0, 11], device='cuda:0')

Thanks for the information.

One difference between both implementations is the usage of BasicLSTMCell and MultiRNNCell in your TF code, while you are using an nn.LSTM in the PyTorch approach.
Maybe it would be easier to stick to the cell approach and use nn.LSTMCell instead?
This would also make it easier to compare the intermediate outputs between both models.
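
For reference, a rough sketch of what the cell-based version could look like: a stack of three nn.LSTMCells with dropout applied to each cell's output (mirroring DropoutWrapper/MultiRNNCell) and an explicit loop over the time steps. The class name and structure here are illustrative, not a fixed recipe:

import torch
import torch.nn as nn

class StackedLSTMCells(nn.Module):  # illustrative name
    def __init__(self, input_dim, hidden_dim, output_dim, num_layers, dropout):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers
        # One LSTMCell per layer; layer 0 consumes the input features
        self.cells = nn.ModuleList(
            [nn.LSTMCell(input_dim if i == 0 else hidden_dim, hidden_dim)
             for i in range(num_layers)])
        # Applied to each cell's output, like TF's output_keep_prob
        # (note: PyTorch dropout is a drop probability, i.e. 1 - keep_prob)
        self.dropout = nn.Dropout(dropout)
        self.linear = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):  # x: [batch, time, features]
        batch_size, time_steps, _ = x.size()
        h = [x.new_zeros(batch_size, self.hidden_dim) for _ in range(self.num_layers)]
        c = [x.new_zeros(batch_size, self.hidden_dim) for _ in range(self.num_layers)]
        outputs = []
        for t in range(time_steps):  # explicit unroll, like tf.unstack + static_rnn
            inp = x[:, t, :]
            for i, cell in enumerate(self.cells):
                h[i], c[i] = cell(inp, (h[i], c[i]))
                inp = self.dropout(h[i])
            outputs.append(inp)
        # Mean over time, matching tf.reduce_mean(outputs, axis=0)
        out = torch.stack(outputs, dim=0).mean(dim=0)
        return self.linear(out)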

It works using nn.LSTMCell as directed.
Please tell me the difference between LSTMCell and LSTM.