Do I always need to reshape the embedding?

In the PyTorch seq2seq tutorial there is embedded = self.embedding(input).view(1, 1, -1). I'm just wondering: do I always need to reshape it to (1, 1, -1), i.e. (batch_size=1, seq_size=1, input_size)?

No, you don't. In this tutorial the words are fed one by one, which is why the output of self.embedding is reshaped to this shape. You could also use an arbitrary batch size and sequence length (which would require some changes to the code of this tutorial); see the sketches below.
Also note that the GRU expects the input to have the shape [seq_len, batch_size, features] if you don't specify batch_first=True.
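Here is a minimal sketch (the vocabulary and hidden sizes are made up) of what the tutorial's single-word encoder step looks like with the default [seq_len, batch_size, features] layout:

import torch
import torch.nn as nn

# Made-up sizes, mirroring the tutorial's encoder step
vocab_size, hidden_size = 10, 8
embedding = nn.Embedding(vocab_size, hidden_size)
gru = nn.GRU(hidden_size, hidden_size)        # default batch_first=False

input = torch.tensor([3])                     # a single word index, shape (1,)
embedded = embedding(input).view(1, 1, -1)    # (seq_len=1, batch_size=1, hidden_size)
hidden = torch.zeros(1, 1, hidden_size)       # (num_layers=1, batch_size=1, hidden_size)
output, hidden = gru(embedded, hidden)
print(output.shape)                           # torch.Size([1, 1, 8])

With batch_first=True the same GRU would instead expect the input as (batch_size, seq_len, features).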

You are right, thanks! In this tutorial the words are fed one by one, and "input" here is a single word, whose embedding is then reshaped to (seq_len=1, batch_size=1, input_size).
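For reference, a minimal sketch (again with made-up sizes) of the batched case the answer mentions: with a full batch of sequences, the embedding output already has the shape the GRU expects, so no .view(1, 1, -1) is needed.

import torch
import torch.nn as nn

# Made-up sizes for a batched example
vocab_size, hidden_size = 10, 8
embedding = nn.Embedding(vocab_size, hidden_size)
gru = nn.GRU(hidden_size, hidden_size)   # default batch_first=False

seq_len, batch_size = 5, 4
batch = torch.randint(0, vocab_size, (seq_len, batch_size))  # word indices
embedded = embedding(batch)              # (seq_len, batch_size, hidden_size) = (5, 4, 8)
output, hidden = gru(embedded)           # initial hidden state defaults to zeros
print(output.shape)                      # torch.Size([5, 4, 8])
print(hidden.shape)                      # torch.Size([1, 4, 8])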