Skip time steps in a trained LSTM

Hi,
I have trained an LSTM network on sequential data. For example, the data looks like this:

d0, d1, d2, d3, d4, d5, d6, ...

At test time I want to use the trained model and feed it data like this:

d0, d2, d4, d6, ...

Is there any way to do this? I tried feeding zeros, and also the sequence d0, d0, d2, d2, d4, d4, d6, d6, ..., but neither seems to work.

If you want to skip every second sample, you could just slice the input tensor in the time dimension via [::2], no?
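For example, a minimal sketch of that slicing, assuming the default `(seq_len, batch, features)` layout of `nn.LSTM` (with `batch_first=False`):

```python
import torch

# Dummy input: 8 time steps, batch of 4, 16 features (shapes are just an example)
x = torch.randn(8, 4, 16)

# Keep every second time step (d0, d2, d4, ...)
x_skipped = x[::2]            # shape: (4, 4, 16)

# With batch_first=True, slice the second dimension instead:
# x_skipped = x[:, ::2]
```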

I want to use all of the data in training, but at test time I only want to feed every second sample. When I do this, the LSTM outputs are not good. It seems the hidden state needs to be updated with every sample, not just every second one. I'm looking for a trick to make this work, either in the training or in the test phase. I'm thinking of something like input dropout; a rough sketch of that idea is below.
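This is only a sketch of the "input dropout" idea, not a tested recipe: during training, randomly replace some time steps with a copy of the previous one, so the model sees inputs that look like the d0, d0, d2, d2, ... sequence it will get at test time. The function name `random_skip_augment` and the probability `p` are my own choices.

```python
import torch

def random_skip_augment(x, p=0.5):
    """Training-time augmentation sketch.

    With probability p, replace each odd time step with a copy of the
    previous even one, mimicking the d0, d0, d2, d2, ... test-time input.
    x: tensor of shape (seq_len, batch, features)
    """
    x = x.clone()
    seq_len = x.size(0)
    for t in range(1, seq_len, 2):
        if torch.rand(1).item() < p:
            x[t] = x[t - 1]
    return x

# During training (hypothetical usage):
#   out, _ = lstm(random_skip_augment(batch))
# At test time, feed d0, d0, d2, d2, ... so the model sees inputs
# similar to what it was trained on.
```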