Is there a convenience function for forecasting `n` timesteps ahead?

I was just wondering if there is an existing convenience function in PyTorch for forecasting `n` timesteps ahead, given a fitted model.

For example, in the R forecast package you can fit a model and then call forecast(h=10), and it will forecast the next 10 timesteps from that model.

The algorithm itself is very simple. Here is a toy example: say you have a model that takes 7 days of inputs and predicts the next 3 days as output, and say you want to predict the next year, i.e. 365 days. You would take the last 7 values from your dataset and predict the next 3 days, which gives you the first 3 days of the 365. Then you would append those predictions to the end of your input data and predict the subsequent 3 days, so now you have 6 of the 365 days. You repeat this predict-and-append loop ceil(365/3) = 122 times (the last step overshoots by one day, which you just truncate) and you have your predictions for the full 365 days.
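Roughly something like the following sketch is what I mean (the `forecast` helper, the window/step sizes, and the assumed (1, window) → (1, step) model shapes are hypothetical, just to illustrate the idea):

```python
import torch

@torch.no_grad()
def forecast(model, history, horizon, window=7, step=3):
    """Toy predict-and-append loop (names and shapes are hypothetical).

    Assumes `model` maps a (1, window) input to a (1, step) prediction and
    `history` is a 1-D tensor holding the observed series.
    """
    model.eval()
    buffer = history.clone()
    preds = []
    n_iters = -(-horizon // step)  # ceil(horizon / step)
    for _ in range(n_iters):
        inp = buffer[-window:].unsqueeze(0)   # last `window` observed/predicted values
        out = model(inp).squeeze(0)           # next `step` predicted values
        preds.append(out)
        buffer = torch.cat([buffer, out])     # append the predictions and repeat
    return torch.cat(preds)[:horizon]         # trim the overshoot on the last step
```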

Now writing the loop is not too difficult, given the algorithm above. But instead of reinventing the wheel, I figured I would check whether there is already some sort of convenience function for this in PyTorch. Also, I imagine I could run this prediction loop on the CPU instead of having to push data to the GPU at each step; that might save some of the data transfer and improve speed, depending on the data volume, the number of prediction steps, etc.?

Thanks.

Hi,

I am not very familiar with the forecast package, but I don’t think we have anything similar in the core lib, no.
The main reason is that this kind of forecasting can be done quite differently depending on the model you have.

> Also, I imagine that I could probably run this prediction loop on the CPU instead of having to push to the GPU

Not sure what you mean here? Do you have a code sample?
Can’t you push things to the GPU once at the beginning? Or keep everything on the CPU and run it there?

@albanD Thanks for the info. Yeah, that makes sense. You are totally right that this kind of forecasting can be handled differently depending on the scenario, but I figured I would check anyway. Like I said, it is not that hard to implement. I guess my only question is whether there is an easy way to shift the elements in a tensor instead of creating a copy of the original tensor with shifted elements each time. For example, could I use torch.roll() and then replace, say, the last 3 elements? Or does roll() also create a copy?
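Something like this toy sketch is what I have in mind (the sizes are just for illustration):

```python
import torch

# Toy sizes just for illustration: a 7-value input window and 3 new predictions.
window = torch.arange(7, dtype=torch.float32)
new_preds = torch.tensor([100.0, 101.0, 102.0])

# Shift everything left by 3, then overwrite the last 3 slots with the predictions.
window = torch.roll(window, shifts=-3)
window[-3:] = new_preds
```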

I can think about the GPU thing. I am still new to PyTorch, so I am still figuring out the balance between CPU and GPU. But you are right, it is probably best to just run the prediction routine on the GPU and then pull the final results back.

Thanks again for your input and very prompt response :).

It does create a copy IIRC.
But if you need to add new elements at the end anyway, you will have to use something like torch.cat(), which does a copy. So you can simply do something like `base = torch.cat([base[3:], new_el], dim=0)` to do just a single copy.
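As a toy illustration of that single-copy update (sizes picked arbitrarily):

```python
import torch

# Arbitrary toy sizes: keep the last 7 values, append 3 new predictions.
base = torch.arange(7, dtype=torch.float32)
new_el = torch.tensor([7.0, 8.0, 9.0])

# Drop the 3 oldest values and append the new ones with a single torch.cat() copy.
base = torch.cat([base[3:], new_el], dim=0)
```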
