I am training a model for classification where each ground truth has some uncertainty and is thus a vector of probability scores, e.g. [0.1, 0, 0.7, 0, 0.2, 0, 0] instead of [0, 0, 1, 0, 0, 0, 0]. The cross-entropy loss in PyTorch, however, accepts only an integer target, so I was hoping someone could recommend a solution or an alternative loss function suitable for my classification problem.

(As an aside, your title for this thread, "Cross entropy with logit
targets," should probably be "Cross entropy with probability targets,"
since you ask about "probability scores" and use probabilities in the
example you give.)

Your ground-truth target probabilities are what are sometimes called
"soft labels." PyTorch's CrossEntropyLoss does not support this kind
of target, but you can write a version that does. For details, please
see this post:
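For reference, a minimal sketch of such a version: cross entropy with probability targets is just the negative sum of target probabilities times log-probabilities, which you can compute yourself with log_softmax. The function name soft_cross_entropy is my own choice, not a PyTorch API:

```python
import torch
import torch.nn.functional as F

def soft_cross_entropy(logits, target_probs):
    # logits: (N, C) raw, unnormalized scores from the model
    # target_probs: (N, C) probabilities, each row summing to 1
    log_probs = F.log_softmax(logits, dim=1)
    # Per-sample cross entropy, averaged over the batch
    return -(target_probs * log_probs).sum(dim=1).mean()
```

With one-hot rows in target_probs this reproduces the standard integer-target CrossEntropyLoss, which is a quick way to sanity-check it.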

Update: from version 1.10, PyTorch supports class-probability targets in CrossEntropyLoss, so you can now simply use:

criterion = torch.nn.CrossEntropyLoss()
loss = criterion(x, y)

where x is the input and y is the target. When y has the same shape as x, it is treated as class probabilities. Note that x is expected to contain raw, unnormalized scores for each class, while y is expected to contain probabilities for each class (typically the output of a softmax layer). You can find details in the docs.
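To make the shapes concrete, here is a small sketch using the soft label from the original question, assuming PyTorch 1.10 or later:

```python
import torch

criterion = torch.nn.CrossEntropyLoss()

# One sample, 7 classes; x holds raw logits (unnormalized scores).
x = torch.randn(1, 7, requires_grad=True)
# y has the same shape as x, so it is interpreted as class probabilities.
y = torch.tensor([[0.1, 0.0, 0.7, 0.0, 0.2, 0.0, 0.0]])

loss = criterion(x, y)
loss.backward()  # gradients flow back to x as usual
```

This matches the manual formula -(y * log_softmax(x)).sum(dim=1).mean(), so you can verify the built-in behavior against your own implementation if you are on an older version.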