Apply torch.nn.functional.conv1d to each row of two tensors

Hi! This is my first time posting in this forum, and it's with a rather odd question, as I am attempting to use PyTorch for something it's probably not really designed for.

So, the big picture is that I am trying to use PyTorch's optimizers to perform non-linear curve fitting. I have code that works overall, but now I need to tweak things to actually work with the model I am interested in. In detail, at some point in my model's forward function I need to convolve the system's impulse response function (which is an exponential) with the system's input function (which is a known measurement) to get the output.

Here is a snippet of code that works:

# Impulse response: exp(-kep * t), computed via broadcasting
out = torch.mul(kep, t)
out = torch.exp(-out)
# conv1d actually cross-correlates, so flip the kernel to get a true
# convolution; trim the padded output back to Nt samples
res = torch.squeeze(torch.nn.functional.conv1d(out.view(Nv, 1, Nt),
                                               torch.flip(Cp, dims=(1,)).view(1, 1, Nt),
                                               padding=Nt - 1))[:, :Nt]

Here is a quick outline of the dimensions of each tensor:

  • kep [Nv,1]
  • t [1,Nt]
  • out [Nv,Nt] <-- system’s impulse response function
  • Cp [1,Nt] <-- system’s input function
  • res [Nv,Nt] <-- system’s output function
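
For concreteness, the snippet above can be run end to end with small made-up sizes (Nv and Nt are placeholders here; in the real code they come from the data):

```python
import torch

# Hypothetical small sizes for illustration only
Nv, Nt = 4, 8

kep = torch.rand(Nv, 1)                    # one decay rate per time series
t = torch.linspace(0, 1, Nt).view(1, Nt)   # shared time axis
Cp = torch.rand(1, Nt)                     # shared input function

# Impulse response, shape [Nv, Nt], via broadcasting
out = torch.exp(-kep * t)

# conv1d cross-correlates, so flip Cp to get a true convolution;
# trim the padded output back to Nt samples
res = torch.squeeze(
    torch.nn.functional.conv1d(out.view(Nv, 1, Nt),
                               torch.flip(Cp, dims=(1,)).view(1, 1, Nt),
                               padding=Nt - 1))[:, :Nt]

print(res.shape)  # torch.Size([4, 8])
```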

So, if I use the same "kernel" Cp as input for all of my Nv time series, this works fine.
What I need to do instead is allow each time series to have a different input (i.e. Cp of shape [Nv, Nt]).

If I change my last line as follows:

out = torch.squeeze(torch.nn.functional.conv1d(out.view(Nv, 1, Nt),
                                               torch.flip(Cp, dims=(1,)).view(Nv, 1, Nt),
                                               padding=Nt - 1))

the output I get has shape [Nv, Nv, Nt].
This makes sense, given that the first dimension of the kernel is interpreted as the number of output channels, but it's not what I want. I'd like to do a 1d convolution of each row of out with the corresponding row of Cp. Is there any way I can do this?


Unfortunately, you won't be able to give a batch of weights to the conv function, as it does not support that.

But you can use the groups parameter. In particular, if groups = n_channels, then each channel is convolved with only its own filter. Does that match what you want?
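
To illustrate the suggestion, here is a minimal sketch of how groups = n_channels behaves (all sizes here are made up for the example):

```python
import torch
import torch.nn.functional as F

# Hypothetical sizes: 3 channels, signal and kernel both of length 5
Nv, Nt = 3, 5
x = torch.rand(1, Nv, Nt)   # one batch element with Nv channels
w = torch.rand(Nv, 1, Nt)   # one kernel per channel

# groups=Nv: channel i of x is convolved only with kernel i
y = F.conv1d(x, w, groups=Nv)
print(y.shape)  # torch.Size([1, 3, 1])

# Equivalent single-channel convolution for channel 0
y0 = F.conv1d(x[:, 0:1, :], w[0:1])
assert torch.allclose(y[:, 0], y0[:, 0])
```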

Thanks a lot for the hint!
This did the trick:

# Treat the Nv time series as Nv channels of a single batch element,
# and give each channel its own kernel via groups=Nv
out = torch.squeeze(torch.nn.functional.conv1d(out.view(1, Nv, -1),
                                               torch.flip(Cp, dims=(1,)).view(Nv, 1, -1),
                                               padding=Nt - 1, groups=Nv))
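
As a sanity check (again with made-up sizes, and trimming the padded output back to Nt samples as in the earlier snippet), the grouped convolution can be verified row by row against independent single-channel convolutions:

```python
import torch
import torch.nn.functional as F

# Hypothetical sizes for illustration only
Nv, Nt = 4, 8
out = torch.rand(Nv, Nt)
Cp = torch.rand(Nv, Nt)   # one input function per time series

res = torch.squeeze(
    F.conv1d(out.view(1, Nv, -1),
             torch.flip(Cp, dims=(1,)).view(Nv, 1, -1),
             padding=Nt - 1, groups=Nv))[:, :Nt]

# Cross-check each row against an independent single-channel convolution
for i in range(Nv):
    ref = torch.squeeze(
        F.conv1d(out[i].view(1, 1, -1),
                 torch.flip(Cp[i:i + 1], dims=(1,)).view(1, 1, -1),
                 padding=Nt - 1))[:Nt]
    assert torch.allclose(res[i], ref, atol=1e-5)

print("per-row convolution matches")
```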

Now I am having problems with the fact that during curve fitting 2 out of 4 parameters are not updated, but I guess that's a separate issue … -.-"