I have 243 features that give a single output, and 16261 samples like that, so my data size is `torch.Size([16261, 243])`.

As for the batch size, I just used `.unsqueeze(dim=0)` to add a third dimension (the batch dimension) of size 1, so my data size has become `torch.Size([1, 16261, 243])`.
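A quick sanity check of that step, using placeholder random data with the same shape (the `x_train_tensor` contents here are made up, only the sizes match mine):

```python
import torch

x_train_tensor = torch.randn(16261, 243)  # placeholder data, same shape as mine
x = x_train_tensor.unsqueeze(dim=0)       # add a leading batch dimension of size 1
print(x.shape)  # torch.Size([1, 16261, 243])
```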

I can change my batch size to 72 using `.expand()`.

Here are my code cells:

```
x_train = x_train_tensor.unsqueeze(dim=0)
x_train = x_train.permute(0, 2, 1)
x_train = x_train.expand(72, 243, 16261)
print('Input tensor reshaped: ', x_train.shape)

y_train = y_train_tensor
print('output tensor shape: ', y_train.shape)
```

```
Input tensor reshaped: torch.Size([72, 243, 16261])
output tensor shape: torch.Size([16261])
```

Conv1d model

```
import torch
import torch.nn as nn
import torch.nn.functional as F

# Model
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        # I have given 243 as the input channels of the 1st layer because I have
        # 243 features that give one output. P.S. I'm not sure whether I'm right here.
        self.layer1 = nn.Sequential(
            nn.Conv1d(243, 60, kernel_size=5, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(
            nn.Conv1d(60, 120, kernel_size=5, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2, stride=2))
        self.drop_out = nn.Dropout(0.5)
        self.fc1 = nn.Linear(120 * 4063, 800)
        self.relu_act = nn.ReLU()
        self.fc2 = nn.Linear(800, 14)

    def forward(self, x, prints=False):
        if prints: print('Input shape: ', x.shape)
        out = self.layer1(x)
        if prints: print('Conv1d 1st layer shape: ', out.shape)
        out = self.layer2(out)
        if prints: print('conv1d 2nd layer shape: ', out.shape)
        out = self.drop_out(out)
        out = out.view(out.size(0), -1)
        if prints: print('output size after flattening: ', out.shape)
        out = F.relu(self.fc1(out))
        if prints: print('1st FC layer shape: ', out.shape)
        out = F.relu(self.fc2(out))
        if prints: print('2nd FC layer shape: ', out.shape)
        out = F.log_softmax(out, dim=1)
        return out
```

```
Input shape: torch.Size([72, 243, 16261])
Conv1d 1st layer shape: torch.Size([72, 60, 8129])
conv1d 2nd layer shape: torch.Size([72, 120, 4063])
output size after flattening: torch.Size([72, 487560])
1st FC layer shape: torch.Size([72, 800])
2nd FC layer shape: torch.Size([72, 14])
```
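For reference, the intermediate lengths printed above can be reproduced from the usual `Conv1d`/`MaxPool1d` output-length formula, `floor((L + 2p - k) / s) + 1`. A minimal sketch in plain Python (no torch needed; `out_len` is just a helper name I made up):

```python
# Output length of a Conv1d or MaxPool1d layer: floor((L + 2*p - k) / s) + 1
def out_len(L, k, s, p=0):
    return (L + 2 * p - k) // s + 1

L = 16261
L = out_len(L, k=5, s=1, p=1)  # Conv1d, layer 1   -> 16259
L = out_len(L, k=2, s=2)       # MaxPool1d, layer 1 -> 8129
L = out_len(L, k=5, s=1, p=1)  # Conv1d, layer 2   -> 8127
L = out_len(L, k=2, s=2)       # MaxPool1d, layer 2 -> 4063
print(L, 120 * L)  # 4063 487560
```

which matches the flattened size `120 * 4063 = 487560` used as `fc1`'s `in_features`.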

Checking whether it’s working…

```
model = CNN()
prediction = model(x_train, prints=True)

loss_function = nn.CrossEntropyLoss()
loss = loss_function(prediction, y_train)
```

My error:

```
ValueError: Expected input batch_size (72) to match target batch_size (16261)
```
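From what I can tell, `nn.CrossEntropyLoss` expects the input as `(batch, num_classes)` and the target as `(batch,)` with the same batch size. A minimal sketch with made-up tensors that reproduces the same `ValueError` (and then succeeds when the batch sizes agree):

```python
import torch
import torch.nn as nn

loss_function = nn.CrossEntropyLoss()

pred = torch.randn(72, 14)               # (batch=72, num_classes=14)
target = torch.randint(0, 14, (16261,))  # batch=16261 -> mismatch
try:
    loss_function(pred, target)
except ValueError as e:
    print(e)  # Expected input batch_size (72) to match target batch_size (16261)

# With matching batch sizes it works:
target_ok = torch.randint(0, 14, (72,))
loss = loss_function(pred, target_ok)
print(loss.item())
```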

Yes, I just changed the batch size to 72, and the error is the same.

Could you please help me sort this out?