Hello there,

As is commonly done, I use `nn.CrossEntropyLoss()` as my training loss. This raised an error on this line of my code: `loss = criterion(outputs, labels)`.

What causes this issue? My labels are:

`tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53])`

and the signal batch shape is `torch.Size([54, 2, 100, 200])`, where 54 is the total number of labels, so each signal has shape `torch.Size([2, 100, 200])`.
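
To make those shapes concrete, here is a sketch with dummy tensors (zeros stand in for my real signals):

```python
import torch

signals = torch.zeros(54, 2, 100, 200)  # dummy batch with my signal shape
labels = torch.arange(54)               # one integer class index per signal, 0..53

print(signals.shape[0] == labels.shape[0])  # True: one label per signal
```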

`Model design code:`

```python
self.conv1 = nn.Sequential(
    nn.Conv2d(2, 16, 3, stride=1, padding=1)
)
self.conv2 = nn.Sequential(
    nn.Conv2d(16, 32, 3, stride=1, padding=1)
)
self.conv3 = nn.Sequential(
    nn.Conv2d(32, 64, 3, stride=1, padding=1)
)
self.fc1 = nn.Sequential(
    nn.Linear(19200, 576)
)
self.fc2 = nn.Sequential(
    nn.Linear(576, 150)
)
self.fc3 = nn.Sequential(
    nn.Linear(150, 80)
)
self.fc4 = nn.Sequential(
    nn.Linear(80, 10)
)
self.pool = nn.Sequential(
    nn.MaxPool2d(2, 2)
)

def forward(self, input_data):
    input_data = self.pool(F.relu(self.conv1(input_data)))
    input_data = self.pool(F.relu(self.conv2(input_data)))
    input_data = self.pool(F.relu(self.conv3(input_data)))
    input_data = torch.flatten(input_data, 1)
    input_data = F.relu(self.fc1(input_data))
    input_data = F.relu(self.fc2(input_data))
    input_data = F.relu(self.fc3(input_data))
    input_data = self.fc4(input_data)
    return input_data
```
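
For reference, `fc1`'s input size of 19200 matches the three pooling stages; a quick sketch of the arithmetic (assuming `MaxPool2d(2, 2)` floors odd sizes, which it does by default):

```python
# The convs (kernel 3, stride 1, padding 1) preserve H and W;
# each MaxPool2d(2, 2) halves them, flooring odd sizes.
h, w = 100, 200
for _ in range(3):          # three conv -> relu -> pool stages
    h, w = h // 2, w // 2   # 100x200 -> 50x100 -> 25x50 -> 12x25
print(64 * h * w)           # 64 channels after conv3: 64 * 12 * 25 = 19200
```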

`Train model:`

```python
for target, labels in trainloader:
    target, labels = target.to(DEVICE), labels.to(DEVICE)
    outputs = model(target.float())
    loss = criterion(outputs, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    running_loss += loss.item()  # accumulate across batches
```
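
To reproduce the failure outside my training loop, here is a minimal sketch (random scores stand in for the model output; `54` is my label count and `10` is `fc4`'s output size):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
outputs = torch.randn(54, 10)  # fc4 = nn.Linear(80, 10) -> 10 scores per signal
labels = torch.arange(54)      # class indices 0..53, as in my dataset

# nn.CrossEntropyLoss expects every label in [0, C), where C = outputs.shape[1]
raised = False
try:
    criterion(outputs, labels)
except Exception:
    raised = True
print(raised)  # True here: label 53 is outside the model's 10 output classes
```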