I am trying to write a custom loss function, but (I think) it is currently not backpropagating. After some searching online, it seems the problem is that the loss is not differentiable. Could I get some help on how to make it differentiable?
This loss function is a simplified version of a more complicated loss function that I would like to use for a multi-label classifier. Here, there are 3 labels (0: Neutral, 1: Positive, 2: Negative). I calculate the loss as follows (a worked example follows the list):
- If the real label is neutral, I penalise the positive and negative outputs with weight 0.5 each.
- If the real label is positive, I penalise the neutral output with weight 0.5 and the negative output with weight 1.
- If the real label is negative, I penalise the neutral output with weight 0.5 and the positive output with weight 1.
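For instance (numbers made up): if the real label is positive (1) and the model outputs [0.2, 0.6, 0.2], the loss for that sample should be the squared neutral output weighted by 0.5 plus the squared negative output weighted by 1:

    0.5 * 0.2**2 + 1.0 * 0.2**2 = 0.02 + 0.04 = 0.06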
import torch

def custom_loss_function(output, target):
    res = []
    for graph_no in range(len(output)):
        currOutput = output[graph_no]
        currTarget = target[graph_no]
        if currTarget == 0:  # If real label is neutral
            currLoss = (currOutput[1]**2)*0.5 + (currOutput[2]**2)*0.5
            res.append(currLoss)
        elif currTarget == 1:  # If real label is positive
            currLoss = (currOutput[0]**2)*0.5 + (currOutput[2]**2)
            res.append(currLoss)
        elif currTarget == 2:  # If real label is negative
            currLoss = (currOutput[0]**2)*0.5 + (currOutput[1]**2)
            res.append(currLoss)
    finalRes = torch.mean(torch.Tensor(res))
    return finalRes
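To show what I mean by "not backpropagating", here is a quick check with dummy inputs (shapes and labels made up, not my real data). The loss that comes back has no grad_fn and does not require grad:

    dummy_output = torch.randn(4, 3, requires_grad=True)  # fake model outputs for a batch of 4
    dummy_target = torch.tensor([0, 1, 2, 1])             # fake labels
    loss = custom_loss_function(dummy_output, dummy_target)
    print(loss.grad_fn)        # None
    print(loss.requires_grad)  # False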
My train step:
def train():
    model.train()
    for data in loader:  # Iterate in batches over the training dataset.
        out = model(data.x, data.edge_index, data.batch)  # Perform a single forward pass.
        loss = loss_fn(out, data.y)  # Compute the loss.
        loss.requires_grad_()  # Added because backward() otherwise complains that loss does not require grad.
        loss.backward()  # Derive gradients.
        optimizer.step()  # Update parameters based on gradients.
        optimizer.zero_grad()  # Clear gradients.
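This is also how I confirmed that no gradients actually reach the model. With a stand-in model (a single Linear layer here, not my actual network), the parameter gradients are still None after backward(), so optimizer.step() has nothing to apply:

    stand_in = torch.nn.Linear(8, 3)           # stand-in for my real model
    out = stand_in(torch.randn(4, 8))          # fake batch of 4 samples with 8 features
    loss = custom_loss_function(out, torch.tensor([0, 1, 2, 1]))
    loss.requires_grad_()                      # same workaround as in train()
    loss.backward()
    print(stand_in.weight.grad)                # None -- nothing flowed back to the parameters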
Any and all help would be greatly appreciated. Thank you!