CrossEntropyLoss Problem for multi-class classification

I’m trying to train a multi-class classification model using nn.CrossEntropyLoss() as the loss function.

The batch size is 8 and the number of classes to classify is 6.

The target labels aren’t one-hot encoded (I checked).


This image shows how the target is read from the dataframe, with the target’s shape printed in the terminal.

The target tensor seems to have an additional dimension; its shape looks like [batch_size, 1]. For a multi-class classification, remove dim1 via label = label.squeeze(1) and it should most likely work.
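A minimal sketch of the shape fix (tensor shapes taken from the thread: batch size 8, 6 classes; the variable names are illustrative, not from the original code):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
batch_size, num_classes = 8, 6

# Model output: raw logits of shape [batch_size, num_classes]
logits = torch.randn(batch_size, num_classes)

# Target as read from the dataframe: class indices with an extra dim, [batch_size, 1]
label = torch.randint(0, num_classes, (batch_size, 1))

# nn.CrossEntropyLoss expects class-index targets of shape [batch_size],
# so remove dim1 before computing the loss
label = label.squeeze(1)  # shape: [batch_size]

loss = criterion(logits, label)
print(label.shape, loss.item())
```

With the extra dimension left in, the same call raises a shape-mismatch error, since the criterion then treats the target as something other than a 1D tensor of class indices.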

PS: you can post code snippets by wrapping them in three backticks ```, which would make debugging easier :wink:


Thank you, I figured it out!