Hello,

I am trying to build a CNN, but somehow the batch size is interpreted as part of the input dimension.

Here is my main function.

```
import torch
from torch import nn

if __name__ == '__main__':
    # Set options
    torch.manual_seed(42)
    device = 'cpu'
    # Parameters
    learning_rate = 0.01
    batch_size = 2
    epochs = 4
    # Generate training and validation data
    N = [10, 10]  # 10 noise-only waveforms, 10 noisy-signal waveforms
    trainingData = generate_data(N)
    validationData = generate_data(N)
    # Get dataset objects
    TrainDS = Dataset(*trainingData)
    ValidDS = Dataset(*validationData)
    # Get data loaders
    TrainDL = torch.utils.data.DataLoader(TrainDS, batch_size=batch_size, shuffle=True)
    ValidDL = torch.utils.data.DataLoader(ValidDS, batch_size=batch_size, shuffle=True)
    # Get model
    model = get_model().to(device)
    # Get loss function
    loss_fn = nn.BCEWithLogitsLoss()
    # Get optimizer
    optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
    # Epochs
    for t in range(epochs):
        print(f"\nEpoch {t + 1} of {epochs}:")
        train(TrainDL, model, loss_fn, optimizer)
        evaluation(ValidDL, model)
    print("Done!")
```

My goal is to detect a signal inside noise. I won't explain the details of the generated data since they don't matter here; in the end we get a time series representing 1 s of signal with a sampling rate of 2048 Hz. Meaning: one sample is a vector of size 2048.

`generate_data()` takes an argument N, where N is a "tuple". We get 10 time series representing a waveform that is only noise and 10 time series representing a waveform that is a noisy signal.

`generate_data()` returns `samples, labels`. A label tells us whether the corresponding sample contains a signal or not.

So `trainingData` is a tuple `(samples, labels)` of type `<class 'tuple'>`, and `samples` as well as `labels` are each of length 10 + 10 = 20 and of type `<class 'list'>`.
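For context, here is a minimal sketch of what my `generate_data()` does. The actual signal and noise generation is more involved (and hypothetical here), but only the sizes and types matter for this question:

```python
import torch

def generate_data(N, sr=2048):
    """Return (samples, labels): N[0] noise-only series, N[1] noisy-signal series.

    Stand-in implementation -- each sample is a 1D tensor of length sr
    (1 s at 2048 Hz), matching the shapes described above.
    """
    samples, labels = [], []
    t = torch.linspace(0.0, 1.0, sr)
    for _ in range(N[0]):  # noise-only waveforms -> label 0
        samples.append(torch.randn(sr))
        labels.append(0.0)
    for _ in range(N[1]):  # a sinusoid buried in noise -> label 1
        samples.append(torch.sin(2 * torch.pi * 50 * t) + torch.randn(sr))
        labels.append(1.0)
    return samples, labels

samples, labels = generate_data([10, 10])
print(len(samples), samples[0].shape)  # 20 torch.Size([2048])
```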

Now, the Dataset object is very simple:

```
class Dataset(torch.utils.data.Dataset):
    def __init__(self, samples, labels):
        assert len(samples) == len(labels)
        self.samples = samples
        self.labels = labels

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        return self.samples[i], self.labels[i]
```
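To show what the loader actually yields, here is a self-contained check with random stand-in data (a plain list of `(sample, label)` tuples works as a map-style dataset, so the custom class isn't needed for this demonstration):

```python
import torch

# Stand-in for the 20 samples: 1D tensors of length 2048, as in the post.
samples = [torch.randn(2048) for _ in range(20)]
labels = [float(i >= 10) for i in range(20)]

# A list of (sample, label) tuples is itself a valid map-style dataset.
loader = torch.utils.data.DataLoader(list(zip(samples, labels)), batch_size=2)

batch_samples, batch_labels = next(iter(loader))
# Default collation stacks the 1D samples into a single 2D tensor:
print(batch_samples.shape)  # torch.Size([2, 2048]) -- (batch, length)
print(batch_labels.shape)   # torch.Size([2])
```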

After we have the data sets, we get the data loaders.

Note: I know that I could generate the data directly in the dataset object "constructor".

Then we get the model, the loss function, and an optimizer, and loop over the epochs.

```
def get_model():
    # Sample rate
    sr = 2048
    return torch.nn.Sequential(   # Shapes
        nn.Conv1d(1, 32, 16),     # 32 x sr
        nn.ReLU(),
        nn.Linear(sr, 2),
        nn.Softmax(dim=1)
    )
```

Note: The dimensions here are probably wrong. I'm still a bit confused about the CNN; I'll approach that problem once I've solved the one described below.
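For reference, this is what the first layer expects in isolation. `Conv1d` takes a `(batch, channels, length)` input, and with stride 1 and no padding the output length shrinks by `kernel_size - 1`:

```python
import torch
from torch import nn

conv = nn.Conv1d(in_channels=1, out_channels=32, kernel_size=16)

# Conv1d wants (batch, channels, length). With stride 1 and no padding the
# output length is L - kernel_size + 1 = 2048 - 16 + 1 = 2033.
x = torch.randn(2, 1, 2048)
y = conv(x)
print(y.shape)  # torch.Size([2, 32, 2033])
```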

In each epoch, we train and then evaluate on the validation data.

```
def train(TrainDL, model, loss_fn, optimizer):
    # Put model in train mode
    model.train()
    # TrainDL is an iterator. Each iteration gives us a batch of
    # (samples, labels). The size of the batch depends on batch_size.
    # We wrap the TrainDL iterator in enumerate to also get the batch index:
    # i_batch tells us which batch we are currently working through;
    # (samples, labels) is the actual data.
    for i_batch, (samples, labels) in enumerate(TrainDL):
        print(f"Batch number {i_batch}")
        # Send data to device
        samples = samples.to(device)
        labels = labels.to(device)
        # Reset gradients TODO: Why?
        optimizer.zero_grad()
        # Compute prediction
        labels_pred = model(samples)
        # Compute loss
        loss = loss_fn(labels_pred, labels)
        # Backpropagation
        loss.backward()
        # Make a step in the optimizer
        optimizer.step()
```
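(Answering my own `TODO: Why?` with a small experiment: PyTorch accumulates gradients across `backward()` calls, so without zeroing, each batch's gradients would pile on top of the previous ones.)

```python
import torch

w = torch.ones(1, requires_grad=True)

(w * 3).sum().backward()
print(w.grad)  # tensor([3.])

# Without zeroing, the next backward() ADDS to the existing gradient:
(w * 3).sum().backward()
print(w.grad)  # tensor([6.])

# optimizer.zero_grad() resets the gradients between batches so each
# optimizer.step() uses only the current batch's gradient.
```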

Now I get the following error:

RuntimeError: Expected 3-dimensional input for 3-dimensional weight [32, 1, 16], but got 2-dimensional input of size [2, 2048] instead

Note that I have set the batch size to 2 and each sample is of size 2048, so each step of the data-loader iterator returns two of our samples, i.e. a tensor of shape [2, 2048].

`samples` in the training loop has type `<class 'torch.Tensor'>`.

So we actually pass something of shape [2, 2048] to our CNN, but I can't see how this is wrong. I somehow assumed there is some magic going on such that the model would know we have a batch size of 2.

So I'm confused about the error.
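To make the question concrete, here is a minimal reproduction of the mismatch on the first layer alone. The `unsqueeze` line is just my guess at a workaround (adding a channel dimension of size 1), not necessarily the right design for the full model:

```python
import torch
from torch import nn

conv = nn.Conv1d(1, 32, 16)
batch = torch.randn(2, 2048)    # what the DataLoader yields: (batch, length)

# conv(batch) raises exactly the RuntimeError from above, because the
# weight [32, 1, 16] requires a 3D (batch, channels, length) input.
# Inserting a channel dimension of size 1 makes the shapes line up:
out = conv(batch.unsqueeze(1))  # (2, 2048) -> (2, 1, 2048)
print(out.shape)                # torch.Size([2, 32, 2033])
```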