Minimum kernel size for convolution solved by striding?!

I came across something I don’t understand with nn.Conv1d:

    import torch
    import torch.nn as nn
    from torch import autograd

    input = autograd.Variable(torch.randn(1, 1, 2))  # input of length 2
    conv = nn.Conv1d(1, 1, 3)                        # kernel size 3 > input length
    conv(input)

This logically throws an error:

RuntimeError: Given input size: (1 x 1 x 2). Calculated output size: (1 x 1 x 0). Output size is too small at /pytorch/torch/lib/THNN/generic/SpatialConvolutionMM.c:45

But this works and I can’t understand why:

    import torch
    import torch.nn as nn
    from torch import autograd

    input = autograd.Variable(torch.randn(1, 1, 2))  # same length-2 input
    conv = nn.Conv1d(1, 1, 3, stride=2)              # same kernel size, but stride=2
    conv(input)

output:

Variable containing:
(0 ,.,.) =
1.5060e+36
[torch.FloatTensor of size 1x1x1]

Why does striding (useless in this case) solve the kernel size issue?

The stride=2 case shouldn’t technically work either. As you can see, the output is uninitialized.
We’ll fix our error-checking code to catch this case as well. Thanks for reporting it.
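
For what it’s worth, here is a plausible reason the size check passes in the stride=2 case (a sketch based on the output-length formula in the nn.Conv1d docs, not a reading of the actual THNN code; the helper names below are invented): the documented formula floors the division, but C-style integer division truncates toward zero, which can turn what should be an output length of 0 into 1.

    import math

    def conv1d_out_len(l_in, kernel_size, stride=1, padding=0, dilation=1):
        # Output-length formula from the nn.Conv1d docs, using a true floor.
        return math.floor((l_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride) + 1

    def conv1d_out_len_truncated(l_in, kernel_size, stride=1, padding=0):
        # Same formula (no dilation), but dividing the way C does:
        # truncation toward zero instead of flooring.
        return int((l_in + 2 * padding - kernel_size) / stride) + 1

    print(conv1d_out_len(2, 3, stride=1))            # 0 -> caught by the size check
    print(conv1d_out_len(2, 3, stride=2))            # 0 -> should also be an error
    print(conv1d_out_len_truncated(2, 3, stride=2))  # 1 -> slips past the check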

Ok, cheers! (I had not noticed the 1e+36 ;))