IndexError: Target 10 is out of bounds

Hello there,
As is common, I use nn.CrossEntropyLoss() as my training loss. It raises an error at this line of my code: loss = criterion(outputs, labels). What causes this issue? My labels are: tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53]) and the signal batch shape is torch.Size([54, 2, 100, 200]).
54 is the total number of labels, so each signal has size torch.Size([2, 100, 200]).
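Before diving into the model, a quick sanity check (a sketch, using the numbers from the post) makes the mismatch visible: the largest label must be strictly less than the number of model outputs for nn.CrossEntropyLoss to work.

```python
import torch

labels = torch.arange(54)   # the labels from the post: 0..53
num_outputs = 10            # out_features of the final fc layer below

# CrossEntropyLoss requires 0 <= label < num_outputs
print(int(labels.max()))                  # 53
print(int(labels.max()) < num_outputs)    # False -> CrossEntropyLoss will fail
```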

Model design code:

import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):  # class name assumed; the post shows only the body
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=1, padding=1)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(16, 32, 3, stride=1, padding=1)
        )
        self.conv3 = nn.Sequential(
            nn.Conv2d(32, 64, 3, stride=1, padding=1)
        )
        self.fc1 = nn.Sequential(
            nn.Linear(19200, 576)
        )
        self.fc2 = nn.Sequential(
            nn.Linear(576, 150)
        )
        self.fc3 = nn.Sequential(
            nn.Linear(150, 80)
        )
        self.fc4 = nn.Sequential(
            nn.Linear(80, 10)
        )
        self.pool = nn.Sequential(
            nn.MaxPool2d(2, 2)
        )

    def forward(self, input_data):
        input_data = self.pool(F.relu(self.conv1(input_data)))
        input_data = self.pool(F.relu(self.conv2(input_data)))
        input_data = self.pool(F.relu(self.conv3(input_data)))
        input_data = torch.flatten(input_data, 1)
        input_data = F.relu(self.fc1(input_data))
        input_data = F.relu(self.fc2(input_data))
        input_data = F.relu(self.fc3(input_data))
        input_data = self.fc4(input_data)
        return input_data
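For reference, the 19200 in fc1 can be verified by tracing shapes through the conv/pool stack above (a standalone sketch; the layers are recreated inline just to show the shapes):

```python
import torch
import torch.nn as nn

# kernel 3, stride 1, padding 1 keeps H and W; MaxPool2d(2, 2) halves them:
# input:  (N, 2, 100, 200)
# pool1:  (N, 16, 50, 100)
# pool2:  (N, 32, 25, 50)
# pool3:  (N, 64, 12, 25)   # 25 // 2 = 12 (floor)
# flatten: 64 * 12 * 25 = 19200, matching fc1's in_features
x = torch.randn(4, 2, 100, 200)
x = nn.MaxPool2d(2, 2)(nn.Conv2d(2, 16, 3, stride=1, padding=1)(x))
x = nn.MaxPool2d(2, 2)(nn.Conv2d(16, 32, 3, stride=1, padding=1)(x))
x = nn.MaxPool2d(2, 2)(nn.Conv2d(32, 64, 3, stride=1, padding=1)(x))
print(x.shape)                    # torch.Size([4, 64, 12, 25])
print(torch.flatten(x, 1).shape)  # torch.Size([4, 19200])
```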

Training loop:

        for target, labels in trainloader:
            target, labels = target.to(DEVICE), labels.to(DEVICE)

            outputs = model(target.float())
            loss = criterion(outputs, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            running_loss += loss.item()  # accumulate; plain = would overwrite each batch

This is a common error. Your model predicts confidences for 10 classes (the final layer has 10 outputs), so CrossEntropyLoss requires 0 <= label <= 9.

As you mentioned, your labels go up to 53, so many of them are >= 10, which causes the out-of-bounds error.
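A minimal sketch reproducing the error and the fix. If the 54 labels really are 54 distinct classes, the final linear layer needs 54 outputs (one logit per class):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
labels = torch.arange(54)        # labels run 0..53

# Reproduce: model emits only 10 logits per sample
bad_outputs = torch.randn(54, 10)
try:
    criterion(bad_outputs, labels)
except (IndexError, RuntimeError) as e:
    print("failed:", e)          # e.g. "Target 10 is out of bounds"

# Fix: one output per class, i.e. nn.Linear(80, 54) as the final layer
good_outputs = torch.randn(54, 54)
loss = criterion(good_outputs, labels)
print(loss.item())
```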

Okay, I see. So what should my output size be? Any suggestions?

It depends on the number of classes in your dataset.
For example, the ImageNet image classification dataset has 1000 classes, and the HMDB action recognition dataset has 51 classes.

What if I don’t use classification? Say I want to train on my data in a self-supervised way, because it is audio signal data without any classes. In that case, do I still need labels? If labels are required, is it necessary to define classes, or is using the target (data) plus labels adequate?

In general, to train a network with gradients, you need an objective function. One example of this approach is to have class labels and apply cross-entropy loss. However, this is not the only way.

If you can define a target for your data and an objective function that measures progress toward that target mathematically, you can build a loss function out of it. I’m not sure what the target and loss function would be for your particular use case.
Hopefully someone who knows your domain can answer that part here.
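As one illustration of training without class labels, here is a hedged sketch of a reconstruction objective: an autoencoder trained with nn.MSELoss, where the input itself serves as the target. The architecture (TinyAutoencoder) and its layer sizes are invented for this example, not taken from the post.

```python
import torch
import torch.nn as nn

# For unlabeled signals, the input can be its own target: the network
# compresses and reconstructs the signal, and the MSE between input and
# reconstruction is the objective -- no class labels needed.
class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 2, 3, stride=2, padding=1,
                               output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
criterion = nn.MSELoss()
x = torch.randn(4, 2, 100, 200)   # same signal shape as in the post
recon = model(x)
loss = criterion(recon, x)        # target is the input itself
loss.backward()
print(recon.shape)                # torch.Size([4, 2, 100, 200])
```

Other label-free options in the same spirit include denoising objectives (reconstruct a clean signal from a corrupted one) and contrastive self-supervised losses.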


Noted. Thank you, Sir, for your time and answers.