I am trying to implement a custom weighted loss based on my labels. I have a regression task, and I found that one value (15.0) is rarely predicted, so I want a higher loss on that value to push the model toward predicting it. I looked online and tried to implement this loss, but it isn't working.
This is what I tried. The given_input comes from the model and target has shape
(130, 1, 144):
    def weighted_loss(given_input, target):
        curr_loss_list = []
        target_data = torch.squeeze(target.data)
        given_input = torch.squeeze(given_input.data)
        for batch_counter, batch_val in enumerate(target_data):
            for counter, val in enumerate(batch_val):
                # this is the value we want to penalize more severely
                if val == 15.0:
                    curr_loss_list.append((given_input[batch_counter, counter] - val)**2 * 5)
                else:
                    curr_loss_list.append((given_input[batch_counter, counter] - val)**2)
        curr_loss = torch.mean(torch.cat(torch.FloatTensor(curr_loss_list), 0))
        return curr_loss

    # later on in the training code
    optimizer = torch.optim.Adam(model.parameters(), learning_rate)
    optimizer.zero_grad()
    outs = model(image, question)
    if torch.cuda.is_available():
        train_loss = weighted_loss(outs, Variable(label).cuda())
    else:
        train_loss = weighted_loss(outs, Variable(label).cpu())
    train_loss.backward()
    optimizer.step()
The error I am currently getting is:
    TypeError: cat received an invalid combination of arguments - got (torch.FloatTensor, int), but expected one of:
     * (sequence[torch.FloatTensor] seq)
     * (sequence[torch.FloatTensor] seq, int dim)
          didn't match because some of the arguments have invalid types: (!torch.FloatTensor!, int)
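For reference, my understanding of what the error is pointing at: `torch.cat` wants a sequence (list or tuple) of tensors, not a single tensor built from my list. A minimal sketch of what I think the correct calls look like (names here are just for illustration):

```python
import torch

# torch.cat joins a *sequence* of tensors along an existing dimension.
a = torch.tensor([1.0, 2.0])
b = torch.tensor([3.0, 4.0])
joined = torch.cat([a, b], 0)  # a list of tensors, not a tensor -> shape (4,)

# A Python list of 0-dim (scalar) tensors, like my per-element losses,
# can be combined with torch.stack instead.
scalars = [torch.tensor(1.0), torch.tensor(2.0), torch.tensor(3.0)]
stacked = torch.stack(scalars)  # shape (3,)
mean_val = stacked.mean()
```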
However, even if I fix this error, I'm not sure the loss will work and that PyTorch will be able to compute the gradients through it. Is it possible to implement this without writing a custom backward function?
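For reference, a vectorized sketch of what I'm aiming for, assuming built-in tensor ops (here `torch.where`) are enough for autograd to handle the backward pass on its own; the `special_value` and `weight` parameters are just my illustration, not anything from an existing API:

```python
import torch

def weighted_loss(given_input, target, special_value=15.0, weight=5.0):
    """MSE where elements whose target equals `special_value` count `weight` times."""
    given_input = given_input.squeeze()
    target = target.squeeze()
    # Per-element weight: `weight` where target == special_value, else 1.0.
    w = torch.where(target == special_value,
                    torch.full_like(target, weight),
                    torch.ones_like(target))
    return torch.mean(w * (given_input - target) ** 2)

# Tiny example: gradients flow through the prediction automatically.
pred = torch.tensor([[14.0, 15.0]], requires_grad=True)
lbl = torch.tensor([[15.0, 15.0]])
loss = weighted_loss(pred, lbl)  # mean(5*(14-15)^2, 5*0^2) = 2.5
loss.backward()  # works without a custom backward function
```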