Hi,
I am doing temporal resampling of multivariate time series using a tensor product.
The goal is to transform an input signal which has C channels and Ti time stamps to an output signal with the same number of channels and To time stamps. Each channel is interpolated independently and the output time stamps are a simple linear combination of the input ones.
That is, the time stamp i of an output channel c is
$y_c[i] = \sum_{l} a_c[i,l]x_c[l]$
where $x_c$ is the corresponding input channel.
This is of course just a matrix multiplication
$y = x A$
where A is the matrix of coefficients (with $A[l,i] = a_c[i,l]$, input index first so that the shapes line up) and y and x are row vectors.
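To make the per-channel case concrete, here is a minimal sketch (shapes and values are made up for illustration) checking that the row-vector-times-matrix form matches the explicit sum:

```python
import torch

Ti, To = 5, 3                     # hypothetical input/output lengths
A = torch.randn(Ti, To)           # A[l, i] = a[i, l]: input index first
x_c = torch.randn(Ti)             # one input channel

y_c = x_c @ A                     # matrix form, shape (To,)

# explicit sum: y[i] = sum_l A[l, i] * x[l]
y_manual = torch.stack(
    [sum(A[l, i] * x_c[l] for l in range(Ti)) for i in range(To)]
)
print(torch.allclose(y_c, y_manual, atol=1e-5))
```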
To do this with minibatches, I need a batched matrix product instead of a plain matrix product, since x will be a tensor with shape (batch, channels, in_dates), A will have shape (batch, in_dates, out_dates), and I want the output to have shape (batch, channels, out_dates).
I have implemented this as follows (b is for Batch, c for Channels, i for Input time stamps and o for Output time stamps):
y = torch.einsum('bci,bio->bco', x, A)
The problem I am facing is that this is very slow. I guess that building the operation from a string does not allow much optimization, and I was wondering whether there is a way to implement this with other, faster operations. Maybe there is some reshaping, (un)squeezing and broadcasting black magic, but I can't figure it out!
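For what it's worth, this particular einsum contracts exactly the last dimension of x against the first non-batch dimension of A, so it should coincide with a batched matrix multiply via `torch.bmm` (or `torch.matmul`). A quick sanity check, with made-up shapes:

```python
import torch

B, C, Ti, To = 4, 3, 10, 7        # hypothetical batch/channel/time sizes
x = torch.randn(B, C, Ti)
A = torch.randn(B, Ti, To)

y_einsum = torch.einsum('bci,bio->bco', x, A)
y_bmm = torch.bmm(x, A)           # batched matmul over the last two dims

print(y_bmm.shape)                # (B, C, To)
print(torch.allclose(y_einsum, y_bmm, atol=1e-5))
```

Whether `bmm` is actually faster than `einsum` here likely depends on the backend and tensor sizes, so it is worth benchmarking both on realistic shapes.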
Thank you for your help.