Seq2seq multiple input features (passing more than one feature per time step)

Is there a way to pass extra features along with the existing word tokens as input to the encoder RNN?

Let's consider the NMT problem: say I have two more feature columns for the corresponding source vocabulary (Feature1 here). For example, consider the table below:

Feature1    Feature2    Feature3
word1       x           a
word2       y           b
word3       y           c
...
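To make the setup concrete, here is a minimal sketch of how such feature columns might be turned into integer indices so each can be embedded like a word. The vocabularies and the helper names are my own illustrative assumptions, not part of any tutorial:

```python
# Sketch: give each feature column its own small vocabulary, so its
# values can be looked up in an nn.Embedding just like word tokens.
# All names and values here are illustrative.
feature2_vocab = {"x": 0, "y": 1}
feature3_vocab = {"a": 0, "b": 1, "c": 2}

# One source sentence as (word, feature2, feature3) triples.
sentence = [("word1", "x", "a"), ("word2", "y", "b"), ("word3", "y", "c")]
word_vocab = {w: i for i, (w, _, _) in enumerate(sentence)}

# Convert each column to its own index sequence.
word_ids  = [word_vocab[w]      for w, f2, f3 in sentence]
feat2_ids = [feature2_vocab[f2] for w, f2, f3 in sentence]
feat3_ids = [feature3_vocab[f3] for w, f2, f3 in sentence]

print(word_ids, feat2_ids, feat3_ids)  # [0, 1, 2] [0, 1, 1] [0, 1, 2]
```

Each column then becomes a parallel index tensor of the same sequence length as the word tokens.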

Moreover, I believe this is glossed over in the seq2seq tutorial (https://github.com/spro/practical-pytorch/blob/master/seq2seq-translation/seq2seq-translation.ipynb), as quoted below:
“When using a single RNN, there is a one-to-one relationship between inputs and outputs. We would quickly run into problems with different sequence orders and lengths that are common during translation… With the seq2seq model, by encoding many inputs into one vector, and decoding from one vector into many outputs, we are freed from the constraints of sequence order and length. The encoded sequence is represented by a single vector, a single point in some N dimensional space of sequences. In an ideal case, this point can be considered the “meaning” of the sequence.”

Furthermore, I tried TensorFlow, but it took me a lot of time to debug and make the appropriate changes, and I got nowhere. I heard from my colleagues that PyTorch would have the flexibility to do this. Please share your thoughts on how to achieve this in PyTorch; it would be great if anyone could explain how to practically implement it. Thanks in advance.
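One common way to do this in PyTorch (a sketch under my own assumptions about vocabulary sizes and embedding widths, not the tutorial's actual encoder) is to give each feature its own `nn.Embedding` and concatenate the embeddings with the word embedding at every time step before feeding the RNN:

```python
import torch
import torch.nn as nn

# Hypothetical sizes -- adjust to your data.
VOCAB_SIZE = 1000   # source word vocabulary (Feature1)
FEAT2_SIZE = 10     # number of distinct Feature2 values
FEAT3_SIZE = 10     # number of distinct Feature3 values
WORD_EMB = 128
FEAT_EMB = 16
HIDDEN = 256

class FeatureEncoderRNN(nn.Module):
    """Encoder that concatenates word and feature embeddings per time step."""
    def __init__(self):
        super().__init__()
        self.word_emb = nn.Embedding(VOCAB_SIZE, WORD_EMB)
        self.feat2_emb = nn.Embedding(FEAT2_SIZE, FEAT_EMB)
        self.feat3_emb = nn.Embedding(FEAT3_SIZE, FEAT_EMB)
        # GRU input size is the sum of the embedding widths.
        self.gru = nn.GRU(WORD_EMB + 2 * FEAT_EMB, HIDDEN, batch_first=True)

    def forward(self, words, feat2, feat3):
        # words, feat2, feat3: LongTensors of shape (batch, seq_len)
        x = torch.cat([self.word_emb(words),
                       self.feat2_emb(feat2),
                       self.feat3_emb(feat3)], dim=-1)
        outputs, hidden = self.gru(x)  # outputs: (batch, seq_len, HIDDEN)
        return outputs, hidden

# Toy usage with random index tensors.
enc = FeatureEncoderRNN()
words = torch.randint(0, VOCAB_SIZE, (2, 5))
feat2 = torch.randint(0, FEAT2_SIZE, (2, 5))
feat3 = torch.randint(0, FEAT3_SIZE, (2, 5))
out, h = enc(words, feat2, feat3)
print(out.shape)  # torch.Size([2, 5, 256])
```

The decoder side of the tutorial would stay unchanged; only the encoder's input width grows. An alternative to concatenation is summing the embeddings (all projected to the same width), which keeps the RNN input size fixed.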