How to use convolutional operations inside pytorch?

Hello and thanks for your help in advance!

As this is my first time posting here, I’ll try to make my post as clear as possible, and I apologize if this has already been answered in the past or is common knowledge.

So, I have just recently started getting into ML and have already written some code with TensorFlow, but am switching over to PyTorch right now. At some point in my code, I would like to convolve a spectrum with a coiflet filter (reconstruction low/high-pass filter), so I need a linear convolution of two one-dimensional sequences. Previously, I would do that with tf.nn.convolution, which takes both the spectra and the filters as inputs. PyTorch, on the other hand, uses torch.nn.Conv1d, which expects the number of channels as an argument.

A similar function to what I am looking for is numpy.convolve, but there I am faced with the issue that I have to use a number of .cpu().detach().numpy() operations, as I am working on a server, which I would like to avoid. This also seems to mess up the backpropagation (most likely a bug in my code).

I hope I was able to state my problem clearly, and sorry for any confusion. Maybe someone could help me understand how to adapt torch.nn.Conv1d for my needs, or how to avoid detaching/moving the data between server and CPU when using the numpy routine. I am pretty new to PyTorch and ML, so any help is appreciated!

Some sample code …

# TensorFlow version:
dataset_new = tf.nn.convolution(dataset, coiflet, padding='SAME')

# Current PyTorch workaround via numpy:
for i in range(batch):
    dataset_new[i, :] = torch.from_numpy(np.convolve(dataset[i, :].cpu().detach().numpy(), coiflet, mode='same')).to(device)

So, my question would be:

  • Is there an efficient way to convolve spectra with predefined filters utilizing only pytorch commands?

You could use the functional API via:

import torch.nn.functional as F

data = ... # input data
weight = ... # conv filters
bias = ... # conv bias
out = F.conv1d(data, weight, bias, padding=...)

I would guess it’s not a bug in your code but a known limitation: using numpy operations breaks the computation graph, since Autograd is not aware of these ops and cannot track them.
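As a minimal sketch of the 1D case: the filter taps below are placeholders (not real coiflet coefficients), but the mechanics are the same for your actual reconstruction filter. Two details to watch: F.conv1d actually cross-correlates, so the kernel must be flipped to match np.convolve, and for an odd kernel length K, padding=K//2 reproduces mode='same'.

```python
import numpy as np
import torch
import torch.nn.functional as F

# Placeholder filter taps for illustration -- NOT real coiflet
# coefficients; substitute your actual reconstruction filter here.
coiflet = np.array([0.1, 0.35, 0.85, 0.35, 0.1], dtype=np.float32)

batch, length = 4, 64
dataset = torch.randn(batch, length)  # stand-in for the spectra

# F.conv1d expects input of shape (N, C_in, L) and weights of shape
# (C_out, C_in, K). np.convolve performs a true convolution (kernel
# flipped), while conv1d cross-correlates, so flip the filter first.
weight = torch.from_numpy(coiflet[::-1].copy()).view(1, 1, -1)

# With an odd kernel length K, padding=K//2 reproduces mode='same'.
out = F.conv1d(dataset.unsqueeze(1), weight, padding=coiflet.size // 2)
out = out.squeeze(1)  # back to shape (batch, length)

# Cross-check one row against the numpy routine.
ref = np.convolve(dataset[0].numpy(), coiflet, mode='same')
assert np.allclose(out[0].numpy(), ref, atol=1e-5)
```

Since everything stays in PyTorch ops, Autograd can track the whole computation, no .cpu()/.detach() round-trips are needed, and the same tensors can live on the GPU.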