Cross entropy loss for sentence classification

I have a tensor of shape [#batch_size, #n_sentences, #scores], where the scores are computed for a fixed set of classes. To clarify, suppose we have a batch size of 1, with 31 sentences and 5 classes that the sentences have been assigned to. The tensor would then have shape [1, 31, 5]. As my target (i.e., the true section labels of the 31 sentences), I’d have a tensor of shape [1, 31] containing 31 numbers in the range 0–4. I’m wondering how I can compute the cross entropy loss for these inputs and targets. Any ideas?

I used a method similar to the one discussed below, and I think this works well. Hope it helps anyone facing the same problem!

You don’t really need to view the tensor in a different way. Cross entropy loss can now handle multiple dimensions. The code below demonstrates how it works.

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

batch_size = 16
no_of_classes = 5
n_sentences = 31

# Scores with the class dimension second: [batch, classes, sentences]
input = torch.randn(batch_size, no_of_classes, n_sentences)
# Class indices in [0, no_of_classes); randint's upper bound is exclusive
target = torch.randint(0, no_of_classes, (batch_size, n_sentences))

loss = criterion(input, target)

Thank you for the response. In which version of PyTorch did this feature arrive? I’m using 1.1.0 and it gives me the error: ValueError: Expected target size (1, 5), got torch.Size([1, 31])…

I’m using 1.4.0. However, I don’t think that’s the problem; this feature has been available for as long as I can remember. Going through your question: your tensor has shape [1, 31, 5]. I’d suggest changing it to [1, 5, 31] before you go ahead. You can do that with x = x.permute(0, 2, 1) before passing it to the cross entropy loss.
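Putting that together with the shapes from the original question, here is a minimal sketch (variable names are illustrative):

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# Scores as described in the question: [batch, sentences, classes] = [1, 31, 5]
logits = torch.randn(1, 31, 5)
# True section labels, one class index (0-4) per sentence: [1, 31]
target = torch.randint(0, 5, (1, 31))

# CrossEntropyLoss wants the class dimension second: [batch, classes, sentences]
loss = criterion(logits.permute(0, 2, 1), target)
print(loss)  # scalar loss averaged over all 31 sentences
```

Without the permute, the loss treats dim 1 (size 31) as the class dimension and expects a target of size (1, 5), which is exactly the ValueError you saw.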


This works perfectly! Now I can see how PyTorch handles such cases. Thank you @charan_Vjy :slight_smile: