Suppose I have a tensor 'output', which is the result of my network, and a tensor 'target', which is my desired output:
output.shape = torch.Size([B, C, N])
target.shape = torch.Size([B, C, N])
I want the network to predict each of the N values correctly, but I do not care which of the C channels each prediction appears in, since the channels have no inherent ordering.
For this reason, I would like to calculate the loss for each possible pairing of channels between output and target, and take the minimum as the overall loss.
To demonstrate what I want to do, it would be written in plain Python as follows:
import torch

def Loss(target, output):
    loss = 0
    min_loss = 0
    # For every batch and target channel, find the output channel
    # with the smallest squared error and add it to the loss
    for b in range(target.shape[0]):
        for c_i in range(target.shape[1]):
            for c_ii in range(target.shape[1]):
                loss_temp = torch.sum((target[b, c_i] - output[b, c_ii])**2)
                if c_ii == 0 or loss_temp < min_loss:
                    min_loss = loss_temp
            loss = loss + min_loss
    # Calculate mean over batches
    loss = loss / target.shape[0]
    return loss
Is there a more elegant, PyTorch-oriented way of performing this operation?
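For reference, here is a vectorized sketch of the same computation using broadcasting, which avoids the Python loops entirely (the function name loss_vectorized is my own; it reproduces the loop semantics above, i.e. an independent minimum per target channel rather than a strict one-to-one permutation):

```python
import torch

def loss_vectorized(target, output):
    # Pairwise squared errors between every target channel i and output channel j:
    # diff[b, i, j] = sum_n (target[b, i, n] - output[b, j, n])**2
    diff = ((target.unsqueeze(2) - output.unsqueeze(1)) ** 2).sum(dim=-1)
    # For each target channel, keep the best-matching output channel
    min_per_channel = diff.min(dim=-1).values  # shape [B, C]
    # Sum over channels, then mean over batches
    return min_per_channel.sum(dim=-1).mean()
```

Note that this (like the loop version) can match several target channels to the same output channel; if a strict permutation is required, a linear assignment per batch element would be needed instead.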