Here’s a description of my problem:
I want to create a custom loss that takes only some of the predictions into account in the loss calculation (entries whose label is -1 should be ignored). But I have a problem: the backward pass fails with:
```
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
```
Here is the code:
```python
import numpy as np
import torch
import torch.nn.functional as F

def my_custom_loss(output, target):
    # Flatten the tensors and convert them to np.array
    sub_labels = np.array([item for sublist in target.tolist() for item in sublist])
    sub_outputs = np.array([item for sublist in output.tolist() for item in sublist])
    # Get the indices of the labels that are not -1
    ind = [index for index, value in enumerate(sub_labels) if value != -1]
    # Keep only those elements, converting back to tensors
    sub_labels = torch.tensor(sub_labels[ind], dtype=torch.float64)
    sub_outputs = torch.tensor(sub_outputs[ind], dtype=torch.float64)
    # Compute the loss
    the_loss = F.binary_cross_entropy_with_logits(sub_outputs, sub_labels)
    return the_loss
```
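For context, here is roughly how the loss is used; the training loop below is a simplified sketch, and `model`, `optimizer`, and `loader` are placeholder names for my actual objects:

```python
# Simplified training loop; `model`, `optimizer`, and `loader` are placeholders
for inputs, targets in loader:
    optimizer.zero_grad()
    outputs = model(inputs)                  # raw logits from the network
    loss = my_custom_loss(outputs, targets)  # custom loss defined above
    loss.backward()                          # <- raises the RuntimeError here
    optimizer.step()
```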
Do you know what the problem with my custom loss is?
Here’s what I have already tried:
- I tried removing the transformations applied to the tensors (i.e., a version of the function that just returns the loss computed directly from `output` and `target`). This makes `loss.backward()` work, but it is not the custom function I want to write.
- I tried adding `the_loss.requires_grad = True` in `my_custom_loss` just before the return. This makes `loss.backward()` work, but my model doesn't learn at all (the loss doesn't change across epochs). I've sketched both attempts below.
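For reference, here is roughly what those two attempts look like (the function names are just for illustration):

```python
import torch.nn.functional as F

# Attempt 1: skip the numpy round trip entirely and return the plain loss.
# `loss.backward()` works here, but the -1 labels are no longer filtered out.
def my_loss_attempt_1(output, target):
    return F.binary_cross_entropy_with_logits(output, target)

# Attempt 2: same body as `my_custom_loss` above, plus this just before the return:
#     the_loss.requires_grad = True
# `loss.backward()` stops raising, but the loss stays constant across epochs.
```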
I hope I was clear; if you want more details, feel free to ask me.
Thank you and have a nice day!