Why do we need one-hot vectors as input for a seq2seq model?

I am just wondering why we need one-hot vectors as input for seq2seq models.
Why can't we use plain numeric token indices as input to the encoder or decoder?

Example:

encoder_input = [3, 4, 5, 1, 2, 0, 0, 0] -> Why can't we use this as input?

I have seen lots of tutorials, but in all of them the input is a one-hot encoded vector. Why?

Thanks in advance

The PyTorch Seq2Seq tutorial uses token indices as inputs. That should be the default approach for anything using nn.Embedding, since an embedding layer is just a lookup table indexed by integer token IDs. Multiplying a one-hot vector by the embedding matrix produces exactly the same row, so the index lookup is simply the efficient way to do the same thing.
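
Here is a minimal sketch (the vocabulary size and embedding dimension are arbitrary values I picked for illustration) showing nn.Embedding consuming integer token indices directly, with no one-hot encoding anywhere:

```python
import torch
import torch.nn as nn

# Arbitrary sizes for illustration only.
vocab_size = 10
embed_dim = 8

# padding_idx=0 assumes 0 is the padding token, as in the example above.
embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)

# Plain integer token indices, exactly like the sequence in the question.
encoder_input = torch.tensor([[3, 4, 5, 1, 2, 0, 0, 0]])  # shape: (batch=1, seq_len=8)

# nn.Embedding looks up one dense vector per index.
embedded = embedding(encoder_input)
print(embedded.shape)  # torch.Size([1, 8, 8])
```

The embedded tensor is what you would then feed into the encoder RNN/Transformer, so one-hot vectors never need to be materialized.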