Vector output summing to one, best loss function?

I know that CrossEntropyLoss can be used for classification problems. However, CrossEntropyLoss expects class indices as targets, not one-hot encoded vectors. While one-hot vectors and class indices can be mapped to each other one-to-one, in my case the target vectors in the training data do not necessarily assign 100% of the weight to one element and 0% to all the others; the weight can be distributed arbitrarily across the vector elements. Converting such a target to an integer index would lose information, so CrossEntropyLoss cannot be used. I thought of simply using MSELoss, but it seems to me that it does not sufficiently account for the constraint that all vector entries must sum to 1 (softmax output layer). Is there a loss function in PyTorch that would be more appropriate in this case?

@KFrank shared an implementation for “soft-labels” here.
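A minimal sketch of such a soft-label cross entropy (the function name `soft_cross_entropy` is my own, not KFrank's exact code): take the log-softmax of the logits and compute the expected negative log-likelihood under the target distribution, which reduces to ordinary cross entropy when the target happens to be one-hot.

```python
import torch
import torch.nn.functional as F

def soft_cross_entropy(logits, target_probs):
    """Cross entropy with probability-vector ("soft") targets.

    logits:       raw model outputs (no softmax applied), shape (N, C)
    target_probs: target distributions, shape (N, C), each row summing to 1
    """
    log_probs = F.log_softmax(logits, dim=1)
    # Per-sample expected negative log-likelihood, averaged over the batch.
    return -(target_probs * log_probs).sum(dim=1).mean()

# Example with an arbitrary (non-one-hot) target distribution:
logits = torch.tensor([[2.0, 0.5, 0.1]])
target = torch.tensor([[0.7, 0.2, 0.1]])
loss = soft_cross_entropy(logits, target)
```

Note that in recent PyTorch versions (1.10+), `nn.CrossEntropyLoss` itself also accepts class-probability targets of shape `(N, C)` directly, so the manual version above is mainly needed on older releases. `nn.KLDivLoss` gives the same gradients, since it differs from this loss only by the target's entropy, which is constant in the logits.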