Let’s say I have a tensor [1,2,3,4,5,6]
and I want to apply a kernel of [0,1]
to it using conv1d. The slight twist: I want the dilation value to not be constant. In other words, the dilation might be [0,1,2],
where each entry is the dilation used at one step of the convolution.
This would yield the following steps (with a stride of 1 and ignoring padding):
Step 1: [*0,2*,3,4,5,6]
Step 2: [0,*0*,3,*4*,5,6]
Step 3: [0,0,*0*,4,5,*6*]
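To make the intended operation concrete, here is a minimal sketch of how I picture it. `varying_dilation_conv1d` is a hypothetical helper (not an existing PyTorch function), and I'm assuming a dilation of d means d extra gaps between the two kernel taps, so step i reads positions i and i + 1 + d[i]:

```python
import torch

def varying_dilation_conv1d(x, weight, dilations):
    # x: 1-D input tensor, weight: 1-D kernel of size K,
    # dilations: one dilation value per output step.
    # Tap j of step i sits at index i + j * (dilations[i] + 1).
    d = torch.as_tensor(dilations)
    steps = d.numel()
    K = weight.numel()
    i = torch.arange(steps).unsqueeze(1)   # (steps, 1)
    j = torch.arange(K).unsqueeze(0)       # (1, K)
    idx = i + j * (d.unsqueeze(1) + 1)     # (steps, K) tap indices
    taps = x[idx]                          # advanced indexing, no Python loop
    return taps @ weight                   # weighted sum per step

x = torch.tensor([1., 2., 3., 4., 5., 6.])
weight = torch.tensor([0., 1.])
out = varying_dilation_conv1d(x, weight, [0, 1, 2])
```

On the example above this gathers the pairs (1,2), (2,4), (3,6) and returns [2., 4., 6.], matching the three steps shown. The question is whether something like this can be done efficiently inside (or instead of) nn.Conv1d.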
Would I have to modify nn.Conv1d to make this possible? If so, how would I add this functionality without a for loop? And would I need to write custom CUDA code to make it run at a speed comparable to ordinary dilated convolutions?