Conv1d - Expected 3-dimensional input for 3-dimensional weight [9, 16, 2], but got 2-dimensional input of size [32, 160] instead

Hi Everyone,

I am facing a problem, probably in data loading, or maybe I am making an error in the Conv1d layer.

Problem definition - I have some understanding of autoencoders (AE) and am now trying to move towards a convolutional AE. I have (32000, 160) data in a single file, and when I pass it through my ConvDLayer (used inside my EncoderAEC class), I get the error mentioned in the headline.

Originally, I have (32000, 16) data, but by taking a fixed-length slice of the signal at a time, I build 160 features per row. For example, with response length = 10 and number of sensors = 16, I take every 10 steps for all 16 sensors and build the rows one by one: each row contains the 1st sensor's values for those 10 response steps, then the 2nd sensor's values for the same 10 steps, and so on up to the 16th sensor.
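If it helps, the windowing described above can be sketched like this (a minimal sketch, assuming non-overlapping windows and the sensor-major row layout described; `signal`, `windows`, and `rows` are illustrative names):

```python
import torch

response_length = 10  # steps per window
num_sensors = 16

# Original signal: 32000 time steps x 16 sensors
signal = torch.randn(32000, num_sensors)

# Split into non-overlapping windows of 10 steps:
# (3200 windows, 10 steps, 16 sensors)
windows = signal.reshape(-1, response_length, num_sensors)

# Flatten each window into one 160-feature row, sensor-major:
# the 1st sensor's 10 values first, then the 2nd sensor's, etc.
rows = windows.permute(0, 2, 1).reshape(-1, num_sensors * response_length)
print(rows.shape)  # torch.Size([3200, 160])
```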

class ConvDLayer(nn.Module):
    def __init__(self, input_channels, output_channels, kernel_size, dilation, l_in):
        super(ConvDLayer, self).__init__()
        stride = 1
        # "same"-style padding for the given input length, stride, dilation and kernel size
        padding = int((l_in * (stride - 1) - stride + dilation * (kernel_size - 1) + 1) / 2)
        self.conv = nn.Conv1d(input_channels, output_channels, kernel_size, padding=padding, dilation=dilation)
        self.batchnorm = nn.BatchNorm1d(output_channels)
        self.activation = nn.ReLU()
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, x):
        # conv -> batchnorm -> activation -> dropout
        x = self.conv(x)
        x = self.batchnorm(x)
        x = self.activation(x)
        x = self.dropout(x)
        return x

class EncoderAEC(nn.Module):
    def __init__(self, num_sensors, channels, kernel_sizes, dilation, response_length):
        super(EncoderAEC, self).__init__()
        self.layers = nn.ModuleList()
        in_channel = num_sensors
        for i, channel in enumerate(channels):
            self.layers.append(ConvDLayer(in_channel, channel, kernel_sizes[i], dilation[i], response_length))
            in_channel = channel

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

Kernel_sizes_encoder = (2,)
Channels_encoder= [9]
in_channel = 16

Can anyone please help me here?

I’m not sure I completely understand the data format, but it seems you are passing it as [batch_size=32, features*seq_len=160]. If so, you might want to split dim1 into a temporal and a feature dimension and pass the input as [batch_size, channels, seq_len].
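A minimal sketch of that reshape, assuming each 160-feature row is laid out sensor-major (the 1st sensor's 10 values first, then the 2nd sensor's, and so on), so `view` can split it directly into channels and time:

```python
import torch
import torch.nn as nn

# Same shapes as the first encoder layer: 16 in-channels, 9 out-channels, kernel 2
conv = nn.Conv1d(in_channels=16, out_channels=9, kernel_size=2)

x = torch.randn(32, 160)   # [batch_size, features*seq_len] -> raises the shape error
x = x.view(32, 16, 10)     # [batch_size, channels=16, seq_len=10]
out = conv(x)
print(out.shape)  # torch.Size([32, 9, 9])
```

With no padding and kernel size 2, the sequence length shrinks from 10 to 9.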