Implementing a custom loss function in PyTorch with weights

This is the loss function I want to implement:

def corr_basic_calc(weights, y_pred):
    num_classes = len(weights[:,0])
    m_c = []
    sum_m_c = []
    for i in range(num_classes):
        m_c.append(y_pred[:,i])
        sum_m_c.append(torch.sum(m_c[i]))
    return num_classes, torch.tensor(m_c,dtype=torch.long), torch.tensor(sum_m_c,dtype=torch.long)


def loss_fn(outputs, targets, weights, uncorrelated_c_pairs):
    num_classes, m_c, sum_m_c = corr_basic_calc(weights, outputs)
    corr_loss = 0
    for uncorr_c_pair in uncorrelated_c_pairs:
        c1, c2, v = uncorr_c_pair
        corr_loss += torch.abs(torch.sum(m_c[c1]*m_c[c2])/sum_m_c[c1] - v)
    return torch.mean(((weights[:,0]*(1-targets)) + (weights[:,1]*targets)) * torch.nn.BCEWithLogitsLoss()(outputs, targets), axis=1) + 0.05*corr_loss/len(uncorrelated_c_pairs)

But I'm getting this error:
in corr_basic_calc(weights, y_pred)
6 m_c.append(y_pred[:,i])
7 sum_m_c.append(torch.sum(m_c[i]))
----> 8 return num_classes, torch.tensor(m_c,dtype=torch.long), torch.tensor(sum_m_c,dtype=torch.long)

ValueError: only one element tensors can be converted to Python scalars

Can you help me solve it?

def corr_basic_calc(weights, y_pred):
    num_classes = len(weights[:,0])
    m_c = []
    sum_m_c = []
    for i in range(num_classes):
        m_c.append(y_pred[:,i])
        sum_m_c.append(torch.sum(m_c[i]))

    return num_classes, torch.tensor(m_c,dtype=torch.long), torch.tensor(sum_m_c,dtype=torch.long)

Check out the documentation for torch.tensor:

torch.tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False) → Tensor

Constructs a tensor with data.

Parameters

data (array_like) – Initial data for the tensor. Can be a list, tuple, NumPy ndarray, scalar, and other types.

You're passing a list of tensors as data, which is incompatible.
Maybe you can use torch.stack or torch.cat instead, roughly like the sketch below.
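
Something like this might work (just a sketch; I also dropped the dtype=torch.long cast, since casting probabilities to long would truncate them and break the gradient):

def corr_basic_calc(weights, y_pred):
    num_classes = len(weights[:, 0])
    # stack the per-class columns instead of building a Python list;
    # this keeps everything as one float tensor and preserves the graph
    m_c = torch.stack([y_pred[:, i] for i in range(num_classes)])  # (num_classes, batch)
    sum_m_c = m_c.sum(dim=1)                                       # (num_classes,)
    return num_classes, m_c, sum_m_c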

I tried using torch.stack, but instead of getting a single-element loss, I got this:

tensor([0.9472, 0.6654, 0.6768, 0.4873, 0.4726, 0.4873, 0.4873, 0.6212, 0.6212,
        0.5999, 0.6359, 0.5999, 0.7993, 0.4873, 0.6212, 0.6768],
       device='cuda:0', dtype=torch.float64, grad_fn=<AddBackward0>)
tensor([0.9482, 0.6664, 0.6778, 0.4883, 0.4736, 0.4883, 0.4883, 0.6223, 0.6223,
        0.6010, 0.6370, 0.6010, 0.8003, 0.4883, 0.6223, 0.6778],
       device='cuda:0', dtype=torch.float64, grad_fn=<AddBackward0>)
tensor([0.9484, 0.6666, 0.6780, 0.4886, 0.4738, 0.4886, 0.4886, 0.6225, 0.6225,
        0.6012, 0.6372, 0.6012, 0.8006, 0.4886, 0.6225, 0.6780],
       device='cuda:0', dtype=torch.float64, grad_fn=<AddBackward0>)
tensor([0.9498, 0.6680, 0.6794, 0.4899, 0.4752, 0.4899, 0.4899, 0.6239, 0.6239,
        0.6026, 0.6386, 0.6026, 0.8020, 0.4899, 0.6239, 0.6794],
       device='cuda:0', dtype=torch.float64, grad_fn=<AddBackward0>)
tensor([0.9501, 0.6683, 0.6797, 0.4902, 0.4754, 0.4902, 0.4902, 0.6241, 0.6241,
        0.6028, 0.6388, 0.6028, 0.8022, 0.4902, 0.6241, 0.6797],
       device='cuda:0', dtype=torch.float64, grad_fn=<AddBackward0>)
tensor([0.9511, 0.6693, 0.6807, 0.4912, 0.4765, 0.4912, 0.4912, 0.6251, 0.6251,
        0.6038, 0.6399, 0.6038, 0.8032, 0.4912, 0.6251, 0.6807],
       device='cuda:0', dtype=torch.float64, grad_fn=<AddBackward0>)
tensor([0.9524, 0.6706, 0.6820, 0.4925, 0.4778, 0.4925, 0.4925, 0.6265, 0.6265,
        0.6052, 0.6412, 0.6052, 0.8045, 0.4925, 0.6265, 0.6820],
       device='cuda:0', dtype=torch.float64, grad_fn=<AddBackward0>)

Can you help me figure out what I'm doing wrong? I probably need to correct the last line in my loss function.

I really don't get what you want to do.
What is uncorrelated_c_pairs?

Also, I think that last return line will either raise an error or, at the very least, it's really hard to read.

What is the mathematical expression for the loss?

BTW, if you're using torch.max for y_pred, you can't backpropagate through it.

It is a weighted binary cross-entropy loss plus a label non-co-occurrence loss. The weights and uncorrelated pairs are calculated beforehand and passed to the loss function.
[image of the loss formula]

This is the loss function. First compute the set of uncorrelated pairs (as per the training data): S_u = {(i, j) | M(i, j) = 0, i < j, 1 ≤ i, j ≤ q}. For each (i, j) ∈ S_u, we sum the model-based conditional co-occurrence probabilities P̂(j|i) and P̂(i|j).
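
In code terms, this is roughly how I read that estimator (my own sketch; cooccurrence_prob is just an illustrative name, and I'm assuming y_pred holds per-sample class probabilities of shape (batch, num_classes)):

def cooccurrence_prob(y_pred, i, j):
    # P_hat(j | i) ≈ sum_n p[n, i] * p[n, j] / sum_n p[n, i]
    joint = torch.sum(y_pred[:, i] * y_pred[:, j])  # expected co-occurrence count
    return joint / y_pred[:, i].sum()

# the non-co-occurrence term then sums P_hat(j|i) + P_hat(i|j) over all (i, j) in S_u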

I'm not sure, but maybe this works.

def corr_loss(weights, y_pred, uncorrelated_c_pairs):
    num_classes = len(weights[:, 0])

    loss = 0
    for uncorr_c_pair in uncorrelated_c_pairs:
        c1, c2, v = uncorr_c_pair
        # P_hat(c2|c1) + P_hat(c1|c2), written as a single fraction
        loss += torch.sum(y_pred[:, c1] * y_pred[:, c2]) * (y_pred[:, c1].sum() + y_pred[:, c2].sum()) / (y_pred[:, c1].sum() * y_pred[:, c2].sum())

    return loss
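
And then, if you need a single scalar loss, maybe combine it with your weighted BCE roughly like this (only a sketch: I'm assuming outputs are logits and weights has shape (num_classes, 2), and the torch.sigmoid call is my guess for how y_pred should be obtained):

def loss_fn(outputs, targets, weights, uncorrelated_c_pairs):
    # per-element BCE so the class weights can be applied before reducing
    bce = torch.nn.BCEWithLogitsLoss(reduction='none')(outputs, targets)
    weighted_bce = (weights[:, 0] * (1 - targets) + weights[:, 1] * targets) * bce
    y_pred = torch.sigmoid(outputs)  # probabilities for the co-occurrence term (my assumption)
    return weighted_bce.mean() + 0.05 * corr_loss(weights, y_pred, uncorrelated_c_pairs) / len(uncorrelated_c_pairs)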