Some questions regarding autoregressive model

  1. Does autoregressive mean that the model's output is combined with the current input, and this combination acts as the input to the model at the next time step?

  2. If that is not the case, what is an autoregressive model? And if it is the case, what is done to prevent one wrong prediction from destroying all future time-step predictions? Once a wrong prediction is made, it would be combined with the input and fed back in at the next time step, leading to further incorrect predictions.

  3. How can we predict a variable-length sequence, and combine that variable-length sequence with the input sequence to again make variable-length predictions, e.g. for pixels?

  1. An autoregressive model uses its previous output(s) as the new input. The Wikipedia article on autoregressive models gives you some more information.
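To make the feedback loop concrete, here is a minimal sketch of autoregressive generation; `toy_model` is a hypothetical stand-in for a real network, not an actual API:

```python
def toy_model(context):
    # Hypothetical "model": predicts the sum of the last two inputs.
    return context[-1] + context[-2]

def generate(seed, steps):
    context = list(seed)
    for _ in range(steps):
        next_value = toy_model(context)  # prediction from past outputs
        context.append(next_value)       # fed back as the next input
    return context

print(generate([0, 1], 5))  # -> [0, 1, 1, 2, 3, 5, 8]
```

Each new value depends only on previously generated values, which is exactly why an early mistake can propagate forward.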

  2. From the same article:

> **Intertemporal effect of shocks**
> In an AR process, a one-time shock affects values of the evolving variable infinitely far into the future. […] Continuing this process shows that the effect of ε₁ never ends, although if the process is stationary then the effect diminishes toward zero in the limit.

(I removed a large portion of the text due to the format.)
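The quoted behavior is easy to verify numerically. For a stationary AR(1) process x_t = φ·x_{t-1} + ε_t with |φ| < 1, a one-time unit shock propagates as φ^t, so it never vanishes exactly but decays toward zero (a small sketch, with made-up numbers):

```python
def ar1_response(phi, steps):
    # Response of x_t = phi * x_{t-1} + eps_t to a single unit shock at t = 0.
    x = 0.0
    response = []
    for t in range(steps):
        eps = 1.0 if t == 0 else 0.0  # one-time shock
        x = phi * x + eps
        response.append(x)
    return response

print(ar1_response(0.5, 6))  # -> [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```

Every later value is still affected by the shock, but the effect shrinks geometrically because |φ| < 1.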

In DL setups, teacher forcing might be used during training.
However, I'm not sure if there is any technique applicable during inference.
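A rough sketch of what teacher forcing changes during training: the ground-truth previous value is fed back in instead of the model's own prediction, so a single error cannot compound across the unrolled steps. `step` here is a hypothetical (deliberately biased) one-step predictor, just to make the error accumulation visible:

```python
def step(prev):
    # Hypothetical imperfect one-step model: always overshoots by 1
    # (the true sequence below increments by 1 each step).
    return prev + 2

def unroll(targets, teacher_forcing):
    preds = []
    prev = targets[0]
    for t in range(1, len(targets)):
        pred = step(prev)
        preds.append(pred)
        # Teacher forcing feeds the ground truth back in;
        # free running feeds the model's own prediction back in.
        prev = targets[t] if teacher_forcing else pred
    return preds

targets = [0, 1, 2, 3]
print(unroll(targets, teacher_forcing=True))   # -> [2, 3, 4] (error stays at +1)
print(unroll(targets, teacher_forcing=False))  # -> [2, 4, 6] (error compounds)
```

With teacher forcing each prediction is off by the same fixed amount, while in free-running mode the errors accumulate step by step, which is the exposure-bias problem raised in question 2.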

  1. I'm not very familiar with autoregressive models in the DL domain. The "classical" AR models use a single output as the (scaled) input to the model, if I'm not mistaken (my work in the signal processing domain dates back a little further :wink: ).