Hi,

I have a multiclass classification problem in NLP. For simplicity, let us say there are just three target classes: "SPORT", "CULTURE", and "TRAVEL", with corresponding labels 0, 1, and 2.

Let us say that during training we want to **add a penalty** to the cross-entropy loss **according to some relationship between the two original classes**. For example (just a dummy example), if the predicted class name has more letters than the expected one, we want to add 'some value' as a regularization term in the cross-entropy loss.

For example, let us have the following values for a batch of 4, which would be the input to a (presumably custom) cross-entropy class:

y: tensor([1, 0, 2, **2**], device='cuda:0')

y_hat: tensor([1, 0, 2, **1**], device='cuda:0')

In this case, we would like to compare the original class names "CULTURE" and "TRAVEL", and since their lengths differ, we would add some penalty factor; the exact value is not important at this moment.
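The length comparison for that batch can be sketched like this (a minimal illustration on CPU; the `class_names` list and variable names are just assumptions from the example above):

```python
import torch

# Hypothetical class names; lengths: SPORT=5, CULTURE=7, TRAVEL=6.
class_names = ["SPORT", "CULTURE", "TRAVEL"]
name_lens = torch.tensor([len(n) for n in class_names])

y = torch.tensor([1, 0, 2, 2])      # expected labels
y_hat = torch.tensor([1, 0, 2, 1])  # predicted labels

# True where the predicted class name has more letters than the expected one.
needs_penalty = name_lens[y_hat] > name_lens[y]
print(needs_penalty)  # tensor([False, False, False,  True])
```

Only the last sample (predicted "CULTURE", 7 letters, vs. expected "TRAVEL", 6 letters) would receive the penalty.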

The fundamental question behind this example is: how can a custom function that operates on the original labels return a value that is added to the cross-entropy loss as a regularization term?

I would appreciate advice on how to approach this kind of problem. Small code snippets would be more than welcome.
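One possible answer, as a sketch: wrap `F.cross_entropy` in a custom `nn.Module` and add the label-based term there. The class names, the `penalty` weight, and the module name below are all made up for illustration; note that `argmax` is not differentiable, so this particular penalty shifts the loss value but contributes no gradient:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LengthPenaltyCrossEntropy(nn.Module):
    """Cross-entropy plus a penalty when the predicted class name is
    longer than the true class name (a sketch of the idea, not a
    definitive implementation)."""

    def __init__(self, class_names, penalty=0.1):
        super().__init__()
        self.penalty = penalty
        # Precompute name lengths once; a buffer moves with .to(device).
        self.register_buffer(
            "name_lens",
            torch.tensor([len(n) for n in class_names], dtype=torch.float),
        )

    def forward(self, logits, target):
        ce = F.cross_entropy(logits, target)
        pred = logits.argmax(dim=1)  # hard predictions (non-differentiable)
        # 1.0 for samples whose predicted name is longer than the true name.
        longer = (self.name_lens[pred] > self.name_lens[target]).float()
        # Added as a regularization term on top of the cross-entropy.
        return ce + self.penalty * longer.mean()


criterion = LengthPenaltyCrossEntropy(["SPORT", "CULTURE", "TRAVEL"], penalty=0.5)
logits = torch.tensor([[0., 5., 0.],   # predicts 1 ("CULTURE")
                       [5., 0., 0.],   # predicts 0 ("SPORT")
                       [0., 0., 5.],   # predicts 2 ("TRAVEL")
                       [0., 5., 0.]])  # predicts 1, but target is 2 -> penalized
target = torch.tensor([1, 0, 2, 2])
loss = criterion(logits, target)
```

If you need the penalty to influence gradients, one common alternative is to make it differentiable, e.g. reweight the per-sample cross-entropy (`F.cross_entropy(..., reduction='none')`) by a label-derived weight, or compute an expected name length under the softmax probabilities instead of the hard `argmax`.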

Thank you