Hey all,

I’m quite new to PyTorch and am currently trying to implement a CNN-based classifier for some multivariate (9 dimensions/axes) timeseries data. I intend to use 1D convolutions and max pools in the network.

My Dataset class returns each sample (which reflects 125 timesteps) as a 9 x 125 tensor. My (toy) CNN is constructed as described below:

```python
self.conv1 = nn.Conv1d(9, 18, kernel_size=3)   # 9 input channels, 18 output channels
self.conv2 = nn.Conv1d(18, 36, kernel_size=3)  # 18 input channels from the previous conv layer, 36 out
self.conv2_drop = nn.Dropout2d()               # dropout
self.fc1 = nn.Linear(36, 72)                   # fully-connected classifier layer
self.fc2 = nn.Linear(72, 19)                   # fully-connected classifier layer
```

And the forward method of the CNN looks like this:

```python
x = F.relu(F.max_pool1d(self.conv1(x), 2))
x = F.relu(F.max_pool1d(self.conv2_drop(self.conv2(x)), 2))
# point A
x = x.view(-1, 36)
# point B
x = self.fc1(x)
x = F.relu(x)
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
```
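As a sanity check on the shapes, I traced the sequence length through the two conv/pool stages by hand (plain Python, no torch; this assumes the defaults of stride 1 and no padding for `Conv1d`, and kernel size = stride = 2 for `max_pool1d`):

```python
def conv_len(length, kernel_size):
    # Conv1d output length with stride 1, no padding, no dilation
    return length - kernel_size + 1

def pool_len(length, kernel_size):
    # max_pool1d output length (stride defaults to kernel_size)
    return length // kernel_size

length = 125                               # timesteps per sample
length = pool_len(conv_len(length, 3), 2)  # after conv1 + pool: 61
length = pool_len(conv_len(length, 3), 2)  # after conv2 + pool: 29
print(length)  # 29
```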

The dataloader uses a batch size of 64. At point A in the code above, the shape of x is [64, 36, 29] - as I understand it, 64 samples (in the batch), each with 36 channels and a length of 29 timesteps. I’d now like to reshape this tensor appropriately for use by the fully-connected layer(s) - the code above does it the way the sample code I’m following (the MNIST example) did. However, it seems that `x.view()` is not maintaining the batch dimension, and instead folds the tensor into the shape [1856, 36], which is definitely not what I want - or what I think I should be getting. Naturally, I get an error:

```
ValueError: Expected input batch_size (1856) to match target batch_size (64)
```

But I do not expect the input batch size to be anything other than 64 - I believe this is happening because I’m using `x.view()` incorrectly.
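The 1856 does at least line up with `view` folding the batch and length dimensions together (plain-Python arithmetic, no torch needed):

```python
batch, channels, length = 64, 36, 29  # shape of x at point A
total_elements = batch * channels * length

# x.view(-1, 36) keeps 36 columns and infers the row count,
# so the batch and length dimensions get merged:
rows = total_elements // channels
print(rows)  # 1856 - matches the batch_size in the error
```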

Any help with this would be highly appreciated.

Thanks in advance