Hi! First time posting in this forum, and it will be with a rather weird question, as I am attempting to use PyTorch for something it's probably not really designed for.
So, the big picture is that I am trying to use PyTorch's optimizers to perform non-linear curve fitting. I have an overall code that works, but now I need to tweak things to actually fit the model I am interested in. In detail, at some point in my model's forward function I need to convolve the system's impulse response function (which is an exponential) with the system's input function (which is a known measurement) to get the output.
Here is a snippet of code that works:
out = torch.mul(kep, t)
out = torch.exp(-out)
res = torch.squeeze(torch.nn.functional.conv1d(out.view(Nv, 1, Nt),
                                               torch.flip(Cp, dims=(1,)).view(1, 1, Nt),
                                               padding=Nt - 1))[:, :Nt]
Here is a quick outline of the dimensions of each tensor:
- kep: [Nv, 1]
- t: [1, Nt]
- out: [Nv, Nt] <-- system's impulse response function
- Cp: [1, Nt] <-- system's input function
- res: [Nv, Nt] <-- system's output function
So, if I use the same "kernel" Cp as input for all my Nv time-series, this works fine.
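For context, here is a self-contained toy version of the working snippet. Nv, Nt and the tensor contents are made up just for illustration; only the shapes matter:

```python
import torch
import torch.nn.functional as F

Nv, Nt = 3, 5  # toy sizes, for illustration only

kep = torch.rand(Nv, 1)                    # one rate constant per time-series
t = torch.linspace(0.0, 1.0, Nt).view(1, Nt)
Cp = torch.rand(1, Nt)                     # input function, shared by all series

out = torch.exp(-torch.mul(kep, t))        # impulse responses, shape [Nv, Nt]

# Flipping Cp turns conv1d's cross-correlation into a true convolution;
# padding=Nt-1 gives the full convolution, truncated back to Nt samples.
res = torch.squeeze(F.conv1d(out.view(Nv, 1, Nt),
                             torch.flip(Cp, dims=(1,)).view(1, 1, Nt),
                             padding=Nt - 1))[:, :Nt]

# res[i, k] == sum_{j <= k} out[i, j] * Cp[0, k - j]   (causal convolution)
```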
What I need to do, instead, is to allow each time series to have a different input (i.e. Cp of shape [Nv, Nt]).
If I change my last line as follows:
out = torch.squeeze(torch.nn.functional.conv1d(out.view(Nv, 1, Nt),
                                               torch.flip(Cp, dims=(1,)).view(Nv, 1, Nt),
                                               padding=Nt - 1))
the output I get is of shape [Nv, Nv, Nt].
This makes sense, given that the first dimension of the kernel is interpreted as the number of output channels, but it's not what I want to do. I'd like to do a 1d convolution of each row of out with the corresponding row of Cp. Is there any way I can do this?
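I wondered whether the groups argument of conv1d could help here. The following sketch (random toy data) seems to produce the shape I want by treating the Nv series as Nv separate groups, but I'm not confident this is the right/idiomatic way to get a row-wise convolution:

```python
import torch
import torch.nn.functional as F

Nv, Nt = 3, 5  # toy sizes
out = torch.rand(Nv, Nt)  # per-series impulse responses
Cp = torch.rand(Nv, Nt)   # per-series input functions

# With groups=Nv, group i convolves input channel i (row i of `out`)
# with kernel i (row i of `Cp`), instead of mixing all Nv*Nv pairs.
res = torch.squeeze(F.conv1d(out.view(1, Nv, Nt),
                             torch.flip(Cp, dims=(1,)).view(Nv, 1, Nt),
                             padding=Nt - 1,
                             groups=Nv))[:, :Nt]

# res has shape [Nv, Nt]: one output series per (out row, Cp row) pair
```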