How to transpose a tensor and apply it to cross entropy loss?

Hi guys. If the dimension of the tensor is [x, y], then the dimension of the labels for the loss is [x]. I want to create a loss: crossEntropyLoss([x, y], [x]) + crossEntropyLoss([y, x], [y]). For the second part, I transpose the original [x, y] tensor, but it keeps giving errors. I also tried cloning before transposing, but it throws the same error.

loss_fn = nn.CrossEntropyLoss()
loss = loss_fn(input, labels) + loss_fn(input.T, labels2)

So the problem is: how can I transpose an input and use it in the loss function?
Thanks a lot

Hi Charles!

I am skeptical of your proposed use case. However:

In its most common usage, CrossEntropyLoss takes integer class
labels for its target. These should be integers that run from zero
to the number of classes minus one, where the number of classes
is the size of the nClass dimension of the input. Note that
transposing your input can change which values are valid for your
class labels.

Here is an example of transposing the input where the corresponding
class labels are constructed to be valid:

>>> import torch
>>> torch.__version__
>>> _ = torch.manual_seed (2021)
>>> x = 3
>>> y = 5
>>> input = torch.randn (x, y)
>>> labels = torch.randint (y, (x,))
>>> labels2 = torch.randint (x, (y,))
>>> loss_fn = torch.nn.CrossEntropyLoss()
>>> loss_fn (input, labels)
>>> loss_fn (input.T, labels2)


K. Frank


I suppose this is the right way to implement it, but when I tried the same implementation as yours, CUDA gave me an error.
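A common cause of opaque CUDA errors (e.g. "device-side assert triggered") with CrossEntropyLoss is labels that fall outside the valid range [0, nClass - 1]. After transposing, the class dimension changes from y to x, so labels that were valid for the original tensor may be out of range for the transposed one. A minimal sanity check, run on CPU where error messages are clearer (the shapes here just mirror K. Frank's example):

```python
import torch

x, y = 3, 5
input = torch.randn(x, y)

# input has shape [x, y], so its class dimension is y:
# labels must lie in [0, y - 1].
labels = torch.randint(y, (x,))

# input.T has shape [y, x], so its class dimension is x:
# labels2 must lie in [0, x - 1].
labels2 = torch.randint(x, (y,))

# Check label ranges before moving tensors to the GPU --
# out-of-range labels on CUDA surface as a device-side assert.
assert labels.min() >= 0 and labels.max() < input.shape[1]
assert labels2.min() >= 0 and labels2.max() < input.T.shape[1]

loss_fn = torch.nn.CrossEntropyLoss()
loss = loss_fn(input, labels) + loss_fn(input.T, labels2)
```

If the assertions fail for your real labels2, the labels fed to the transposed loss term were likely generated for the wrong dimension.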