I’m trying to use a Transformer encoder I coded on weather feature vectors, which are basically 11 features describing the weather.
I have one data point per day, so this is a time series, but there is no trace of the time in my vectors, and I know I should use positional encoding to tell my network “this vector is from 21/05/2021, this one is from 20/04/2019, etc.”
The usual sinusoidal (sin & cos) positional encoding used in NLP doesn’t seem to fit my problem, as it encodes a word’s position relative to the other words in the sentence, while my features are independent values (the temperature of the day doesn’t come after the amount of rain, for instance).
I don’t really know how to encode my feature vectors for my transformer encoder; I’d like to end up with something like
positional_encoded = original_vector + date_encoding, which I think could work well.
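Here is a minimal sketch of what I have in mind, assuming the standard sinusoidal formula but with the “position” taken as the number of days since an arbitrary epoch (the epoch date, the function name `date_encoding`, and using `d_model = 11` to match my feature vectors are just my assumptions for illustration):

```python
import numpy as np
from datetime import date

def date_encoding(d: date, d_model: int = 11,
                  epoch: date = date(2015, 1, 1)) -> np.ndarray:
    """Sinusoidal encoding of the number of days since a fixed epoch.

    Same formula as the NLP positional encoding, except the position is
    the day index rather than the word index, so two consecutive days
    get similar encodings regardless of where they fall in a batch.
    """
    pos = (d - epoch).days                       # day index = "position"
    i = np.arange(d_model)
    # Classic Transformer frequencies: 10000^(2*(i//2)/d_model)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    # Even dimensions get sin, odd dimensions get cos
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

# Then, as in the question:
original_vector = np.random.randn(11)            # one day's 11 weather features
positional_encoded = original_vector + date_encoding(date(2021, 5, 21))
```

I’m not sure whether encoding the absolute day index like this is sound, or whether I should instead encode calendar features (day-of-year, month) to capture seasonality.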
What would be the best way to do so?
Thanks a lot for your answers and help!