Convolution on tensor slice without memory realloc

I have a 1D tensor: a, to which I want to apply multiple 1D convolutions.

Each of these convolutions has to be applied to a diminishing slice of the same vector, a[:-i], i.e. each subsequent convolution is applied after dropping one more element from the end of a.

Slicing is very inefficient in this case because it reallocates the whole remainder of a, so the overall procedure does not fit in memory. Is there any way to .resize_() a without affecting autograd or, alternatively, to specify the convolution limits in each convolution layer? (Each convolution is done with a different layer.)
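
To make the pattern concrete, here is a toy sketch (not my actual model; the names and sizes are made up):

import torch
import torch.nn as nn

a = torch.randn(1, 1, 100)              # (batch, channels, length)
conv = nn.Conv1d(1, 1, kernel_size=1)   # stand-in for the per-step convolution

outputs = []
for i in range(1, a.size(-1)):
    outputs.append(conv(a[:, :, :-i]))  # each step convolves a shorter slice of a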

Thanks,

That shouldn’t be the case, if I understand your use case correctly.
The output tensor will of course allocate new memory, but since the input is just sliced, it should not trigger a copy.
Do you have a code snippet so that we can have a look?
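
For example, a quick check (minimal sketch) that basic slicing returns a view sharing the same memory:

import torch

a = torch.randn(1, 3, 1000)
s = a[:, :, :500]                    # slice along the last dimension
# The slice is a view into a; since it starts at offset 0 it even has the
# same data pointer. No new memory is allocated for the sliced input itself.
print(s.data_ptr() == a.data_ptr())  # True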

Hi, thanks for your answer. Here is the relevant code:

for i in np.arange(self.resolution - 1, -1, -1):
    # Dilated convolutions
    z = self.sliconv[i](m[:, :, :(i + 1)])

where m is the 1D signal (held as a 3D tensor of shape (batch, channels, length), as nn.Conv1d expects) and self.sliconv is defined as:

# Filters for sliding convolutions
self.sliconv = nn.ModuleList()
self.sliconv.append(
    nn.Sequential(
        nn.Conv1d(in_channels=kernel_size * in_channels, out_channels=out_channels,
                  kernel_size=1, stride=1),
        nn.LeakyReLU(LR_ALPHA)
    ).to(self.device)
)
for d in np.arange(1, self.resolution):  # dilations 1 .. resolution - 1
    self.sliconv.append(
        nn.Sequential(
            nn.Conv1d(in_channels=kernel_size * in_channels, out_channels=out_channels,
                      kernel_size=2, stride=1, dilation=int(d)),
            nn.LeakyReLU(LR_ALPHA)
        ).to(self.device)
    )
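
For reference, here is a self-contained, runnable version of the same setup (the values of resolution, in_channels, out_channels, kernel_size and LR_ALPHA are assumed for illustration; they are not shown above). The data_ptr check confirms that each slice is a view of m, so only the outputs allocate new memory:

import torch
import torch.nn as nn

# Assumed hyperparameters (not from the original post)
resolution, in_channels, out_channels, kernel_size = 8, 4, 4, 3
LR_ALPHA = 0.01
device = torch.device("cpu")

sliconv = nn.ModuleList()
sliconv.append(
    nn.Sequential(
        nn.Conv1d(in_channels=kernel_size * in_channels, out_channels=out_channels,
                  kernel_size=1, stride=1),
        nn.LeakyReLU(LR_ALPHA)
    ).to(device)
)
for d in range(1, resolution):
    sliconv.append(
        nn.Sequential(
            nn.Conv1d(in_channels=kernel_size * in_channels, out_channels=out_channels,
                      kernel_size=2, stride=1, dilation=d),
            nn.LeakyReLU(LR_ALPHA)
        ).to(device)
    )

m = torch.randn(1, kernel_size * in_channels, resolution, device=device)

for i in range(resolution - 1, -1, -1):
    sliced = m[:, :, :(i + 1)]
    # The slice shares m's storage; no copy of the input is made.
    assert sliced.data_ptr() == m.data_ptr()
    z = sliconv[i](sliced)  # only the output z is newly allocated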