I am referring to the example here: http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html
The code feeds the tokens into the RNN one at a time:
for ei in range(input_length):
    encoder_output, encoder_hidden = encoder(
        input_variable[ei], encoder_hidden)
    encoder_outputs[ei] = encoder_output[0][0]
However, after reading the docs, I found that the RNN can take a whole sequence in one go (which is presumably faster).
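For example (a minimal sketch of my own, with a made-up hidden_size, not taken from the tutorial): the docs say nn.GRU by default expects input of shape (seq_len, batch, input_size) and hidden of shape (num_layers * num_directions, batch, hidden_size):

hidden_size = 8  # made-up size, just for illustration
gru = nn.GRU(hidden_size, hidden_size)
seq = Variable(torch.zeros(5, 1, hidden_size))  # a 5-token "sentence", batch of 1
h0 = Variable(torch.zeros(1, 1, hidden_size))   # initial hidden state
out, hn = gru(seq, h0)  # out: (5, 1, hidden_size), hn: (1, 1, hidden_size)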
So I modified the code a bit:
class EncoderRNN(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(EncoderRNN, self).__init__()
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding(input_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size)

    def forward(self, input, hidden):
        # input: (seq_len, 1) LongTensor of token indices
        # embedded = self.embedding(input).view(1, 1, -1)  # original per-token version
        embedded = self.embedding(input)  # (seq_len, 1, hidden_size)
        output = embedded
        output, hidden = self.gru(output, hidden)
        return output, hidden

    def initHidden(self):
        result = Variable(torch.zeros(1, 1, self.hidden_size))
        if use_cuda:
            return result.cuda()
        else:
            return result
Then, during training, I can feed the whole sentence without a loop:
encoder_output, encoder_hidden = encoder(input_variable, encoder_hidden)
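As a sanity check (my own sketch with made-up sizes, not part of the tutorial), the one-shot call should produce the same outputs as the per-token loop, since a single-layer GRU is deterministic:

encoder = EncoderRNN(10, 8)  # made-up vocab and hidden sizes
input_variable = Variable(torch.LongTensor([[1], [4], [2], [7]]))  # (seq_len, 1)

# whole sequence in one call
hidden = encoder.initHidden()
all_outputs, _ = encoder(input_variable, hidden)  # (seq_len, 1, hidden_size)

# token-by-token loop, as in the tutorial
hidden = encoder.initHidden()
step_outputs = []
for ei in range(input_variable.size(0)):
    out, hidden = encoder(input_variable[ei:ei + 1], hidden)  # slice keeps the (1, 1) shape
    step_outputs.append(out)

# difference should be (numerically close to) zero
print((all_outputs - torch.cat(step_outputs, 0)).data.abs().max())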
Please correct me if I am wrong. I also wonder whether the for-loop in the example is used on purpose or not.