Loss function in a Siamese Network

I’m sending two images through a Siamese network. Training runs fine, but I don’t fully understand how the CrossEntropyLoss loss function behaves here. I’m trying to compute the similarity between two images, and each image can belong to one of 5 classes.
The model returns the output for one image concatenated with the output for the second image:

    def forward(self, input1, input2):
        o1 = self.model(input1)             # (N, 5) outputs for the first image
        o2 = self.model(input2)             # (N, 5) outputs for the second image
        final = torch.cat([o1, o2], dim=1)  # (N, 10) concatenated
        return final

When I train the model I pass two inputs to the network *(lateral, medial)*. The model returns the output of the first input concatenated with the output of the second, so the result is a vector containing the five predictions for one image followed by the five predictions for the other. Since I have to classify the images into five classes, the label *(target)* takes values between 0 and 4. I don’t understand how the loss function works here, because my predictions have 10 components (indices 0 to 9) while the labels only go from 0 to 4. My question is whether this hurts the model’s performance, and if it does, how I could fix it.
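To illustrate what confuses me, here is a tiny sketch (the shapes and values are made up, just for demonstration): CrossEntropyLoss happily accepts a 10-wide output with targets in [0, 4], because it simply treats the output as logits over 10 classes, of which classes 5–9 are never the target:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# Concatenated output: batch of 4, 5 logits per image * 2 images = 10 columns.
output = torch.randn(4, 10)
# Targets only ever take values 0-4, so columns 5-9 can never be "correct".
target = torch.tensor([0, 3, 2, 4])

loss = criterion(output, target)  # runs without any error
print(loss.item())
print(output.shape)  # torch.Size([4, 10]) — 10 logits, but only 5 real classes
```

So no error is raised, which is why I’m unsure whether this silently degrades training.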

    for lateral, medial, target, file, name in dataloader[phase]:
        lateral, medial, target = lateral.to(device), medial.to(device), target.to(device)
        optimizer.zero_grad()
        with torch.set_grad_enabled(phase == 'train'):
            output = model(lateral, medial)   # (N, 10) concatenated outputs
            loss = criterion(output, target)  # target values are in [0, 4]
            if phase == "train":
                loss.backward()
                optimizer.step()
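One fix I’ve been considering (I’m not sure it’s the right approach): instead of returning the raw concatenation, pass it through a final linear layer so the network emits exactly 5 logits, matching the targets. A minimal sketch, assuming `self.model` outputs 5 features per image; the `SiameseNet` class name and the toy `backbone` here are placeholders, not my real model:

```python
import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.model = backbone              # shared weights for both views
        self.fc = nn.Linear(10, 5)         # map the 2*5 concatenated features to 5 class logits

    def forward(self, input1, input2):
        o1 = self.model(input1)            # (N, 5) features for the lateral view
        o2 = self.model(input2)            # (N, 5) features for the medial view
        final = torch.cat([o1, o2], dim=1) # (N, 10)
        return self.fc(final)              # (N, 5) -> matches targets in [0, 4]

# Hypothetical stand-in backbone producing 5 features per input
backbone = nn.Sequential(nn.Flatten(), nn.Linear(8, 5))
net = SiameseNet(backbone)
out = net(torch.randn(3, 8), torch.randn(3, 8))
print(out.shape)  # torch.Size([3, 5])
```

With this head the output dimension equals the number of classes, so CrossEntropyLoss sees 5 logits and labels 0–4, which I believe is what it expects. Is this the correct way to handle it?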