Customized loss function

I want to customize my loss function to be as simple as the following.
mb_out is the output of the model's forward computation, and it has size (batch_size, 1). In the best case every entry should reach one, so I define the loss below. But the loss does not seem to decrease during training, so I am wondering if this way of defining the loss is wrong.

optimizer = optim.Adam(model.parameters(), lr=0.01)
loss = (1 - torch.sum(mb_out, 1)).sum() / float(batch_size)
optimizer.zero_grad()
loss.backward()
optimizer.step()

It looks like your loss is wrongly defined. Theoretically, you are doing regression, so you could use MSE loss.

But if you just want to modify your loss, it should look like this:

def lossOne(prediction):
    batch_size = prediction.size(0)
    # target of all ones, same shape (batch_size, 1) and dtype as the prediction,
    # so the subtraction below does not broadcast to (batch_size, batch_size)
    gt = torch.ones(batch_size, 1).type_as(prediction)
    loss = (gt - prediction).sum() / float(batch_size)
    return loss
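
Used in place of your loss, the training step would then look roughly like this (assuming mb_out and optimizer are defined as in your snippet):

loss = lossOne(mb_out)
optimizer.zero_grad()
loss.backward()
optimizer.step()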

The main problem in your implementation is the double sum. Interpreting your loss literally, you are asking the sum of the outputs in the current batch to be 1, which does not make sense. In my implementation, each output is pushed towards one.

But even this loss would not work well, in my opinion. Why? Because it can take any value in (-infinity, +infinity), while a proper loss should have the range [0, +infinity). This is why it is recommended to use MSELoss, which has the right range of values. As the target, just use a tensor of all ones, like the one I build at the start of the function above.
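
A minimal sketch of the MSELoss approach, assuming the same mb_out and optimizer as in your snippet:

import torch
import torch.nn as nn

criterion = nn.MSELoss()

# target of all ones, same shape (batch_size, 1) and dtype as the model output
target = torch.ones_like(mb_out)

loss = criterion(mb_out, target)  # >= 0, and 0 only when every output equals one
optimizer.zero_grad()
loss.backward()
optimizer.step()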
