nn.Embedding layer

Hello guys, I have a stupid question. I am currently building a seq2seq model to predict time-series data. Both the input and the target are sinusoidal signals. As I was going through many examples online, pretty much all of them start with an embedding layer. I assume this is because those examples are language-translation models, where the input text needs to be vectorized. In my case the inputs are already numbers, so I presume I don’t need an embedding layer for either the encoder or the decoder. Is that correct?

Yes, I think so. Good Luck with that!


Yes, for your case you indeed won’t need an embedding layer. nn.Embedding is a lookup table that maps discrete indices (e.g. word IDs) to dense vectors, so it only makes sense for categorical inputs; your continuous samples can be fed to the encoder and decoder directly.
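To make that concrete, here is a minimal sketch of an encoder that consumes the raw signal directly (assuming PyTorch). The `Encoder` class and its parameters are illustrative, not from the original post; the `nn.Linear` projection is optional and simply lifts the 1-d signal into a higher-dimensional space, which is not the same thing as an embedding lookup.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Illustrative seq2seq encoder for a continuous 1-d signal (no nn.Embedding)."""

    def __init__(self, input_size=1, hidden_size=32):
        super().__init__()
        # Optional linear lift of the scalar samples; replaces the embedding
        # lookup a text model would use for discrete token indices.
        self.proj = nn.Linear(input_size, hidden_size)
        self.rnn = nn.GRU(hidden_size, hidden_size, batch_first=True)

    def forward(self, x):
        # x: (batch, seq_len, input_size) of real-valued samples
        out, h = self.rnn(self.proj(x))
        return out, h

# One sine wave of 50 samples, shaped (batch=1, seq_len=50, features=1)
x = torch.sin(torch.linspace(0, 6.28, 50)).reshape(1, 50, 1)
enc = Encoder()
out, h = enc(x)
```

A decoder would consume the raw target values the same way, so neither side needs an embedding layer.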