Convolution 1d and simple function


I am trying to implement
"Time-series modeling with undecimated fully convolutional neural networks" by Roni Mittelman, using PyTorch. To test my model, I am running Conv1d over a simple cos function.
My cos tensor has shape (1, 64, 1),
and I declare my convolution as follows:

self.conv1_1 = nn.Conv1d(sequence_length, nb_filter, kernel_size, padding=1)

where sequence_length is 64, nb_filter is 150, and kernel_size is 5.

I get the error :

RuntimeError: Expected object of type torch.DoubleTensor but found type torch.FloatTensor for argument #2 'weight'

I am getting back into programming with frameworks, and while I had a good experience with PyTorch in the past, I am really rusty :stuck_out_tongue:

Thanks in advance


It is hard to see what you are trying to do from so little code and an error without a stack trace.

Could you post a little more of your code?

At a guess, you need something like this…

self.conv1_1 = nn.Conv1d(1, 150, 5, padding=something_complicated)

The arguments being, in order: 1 input feature per timestep, 150 output features, a kernel size of 5, and the padding.

and you would feed it data of shape (batch_size, features, timesteps), in your case (1, 1, 64).
The padding is complicated because you need the right amount of padding on one side in order to ensure that the output at time t does not see any input from time t+1.
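Putting that together, a minimal sketch (the padding=2 here just keeps the output length, it is not causal; the .float() cast is there because the DoubleTensor error above suggests your input is float64):

```python
import torch
import torch.nn as nn

conv1_1 = nn.Conv1d(1, 150, 5, padding=2)  # 1 in-channel, 150 filters, kernel 5

# toy cos signal; .float() guards against float64 inputs (e.g. built from numpy,
# which defaults to float64) that trigger the DoubleTensor/FloatTensor mismatch
x = torch.cos(torch.linspace(0, 6.28, 64)).reshape(1, 1, 64).float()

y = conv1_1(x)
print(y.shape)  # torch.Size([1, 150, 64])
```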

I wrote a subclass of Conv1d that calculates the necessary padding and reshapes the input. You might find it helpful.
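The idea is roughly this (a sketch, not necessarily the exact code linked; the name CausalConv1d is mine): left-pad the input by (kernel_size - 1) * dilation so the output at time t only sees inputs up to time t.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Conv1d):
    """Conv1d that left-pads so the output at time t sees no input after t."""
    def __init__(self, in_channels, out_channels, kernel_size, dilation=1, **kwargs):
        super().__init__(in_channels, out_channels, kernel_size,
                         padding=0, dilation=dilation, **kwargs)
        self.left_pad = (kernel_size - 1) * dilation

    def forward(self, x):
        # pad only on the left of the time axis, keeping the output length
        return super().forward(F.pad(x, (self.left_pad, 0)))

conv = CausalConv1d(1, 150, 5)
x = torch.rand(1, 1, 64)
y = conv(x)
print(y.shape)  # torch.Size([1, 150, 64])
```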


Thank you very much, my mistake was in the input shape and input size :slight_smile:
I have another stupid question:
One of my convolutions takes as input two outputs from two earlier ReLUs, which must be summed. I did the first thing that came to mind, a plain addition, which gives me the error:

RuntimeError: The size of tensor a (60) must match the size of tensor b (56) at non-singleton dimension 2

Any idea?

Thanks a lot for your link, I am looking at it to apply it to my code :slight_smile:


Again, I am guessing…
One of these outputs has passed through one Conv1d; the other has passed through two.
I think the problem is that each Conv1d doesn't have enough padding, so the input sequence was shortened to 60 timesteps after one Conv1d, and then to 56 after the two Conv1d layers.

Therefore you can’t add them together because the sequence length doesn’t match up.

I think if you correct the padding used in the Conv1d layers, the problem will go away.
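To make the guess concrete, here is a sketch with made-up channel counts (16) that reproduces the mismatch and the fix:

```python
import torch
import torch.nn as nn

x = torch.rand(1, 1, 64)

# no padding: each kernel-5 conv shortens the sequence by 4
conv_a = nn.Conv1d(1, 16, 5)    # 64 -> 60
conv_b = nn.Conv1d(16, 16, 5)   # 60 -> 56
a = torch.relu(conv_a(x))       # shape (1, 16, 60)
b = torch.relu(conv_b(a))       # shape (1, 16, 56)
# a + b  # RuntimeError: size of tensor a (60) must match b (56) at dimension 2

# with padding=2 each conv preserves the length, so the branches line up
conv_a = nn.Conv1d(1, 16, 5, padding=2)
conv_b = nn.Conv1d(16, 16, 5, padding=2)
a = torch.relu(conv_a(x))       # (1, 16, 64)
b = torch.relu(conv_b(a))       # (1, 16, 64)
s = a + b                       # works
print(s.shape)  # torch.Size([1, 16, 64])
```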


You were completely right :slight_smile: with the right padding it works perfectly. Too bad there is no

padding='same'

as in Keras; it would have been very useful here.
I eventually made it work, and I can now predict a random cos-like function with some error.

Thanks again.

Have a good one :slight_smile:

In PyTorch you can do Conv1d(..., padding=kernel_size // 2), which is equivalent to padding='same' for odd kernel sizes (with stride 1). But that wouldn't give you causal convolutions.
However, Keras now has padding='causal', for which PyTorch has no easy equivalent.
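That said, a causal convolution is only a couple of lines with F.pad — a sketch, assuming the same toy sizes as above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

kernel_size = 5
conv = nn.Conv1d(1, 150, kernel_size)  # no built-in symmetric padding

x = torch.rand(1, 1, 64)
# Keras-style "causal": pad (kernel_size - 1) zeros on the left only
y = conv(F.pad(x, (kernel_size - 1, 0)))
print(y.shape)  # torch.Size([1, 150, 64])
```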

Yeah, for my application the equivalent of 'same' works perfectly. Thanks a lot man :slight_smile:

I'm confused about this. If I have a 1D data set of size 1 by D and want to apply a 1D convolution with kernel size K and F filters, how does one do it?


self.conv1_1 = nn.Conv1d(1, 150, 5, padding=5 // 2)

should do the trick; then use it in the forward function. For the parameters, take a look at the docs.
I should put the implementation I did on GitHub.
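Spelled out for the question above, with placeholder values for D, K and F:

```python
import torch
import torch.nn as nn

D, K, F = 64, 5, 150  # sequence length, kernel size, number of filters

# in_channels=1 because the data has a single feature per position;
# padding=K // 2 keeps the output length equal to D for odd K
conv = nn.Conv1d(in_channels=1, out_channels=F, kernel_size=K, padding=K // 2)

x = torch.rand(1, 1, D)  # (batch, channels, length)
y = conv(x)
print(y.shape)  # torch.Size([1, 150, 64])
```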