Custom loss with trainable parameters

I want to make a loss module with a trainable parameter, so I wrote the following:

import torch

class CustomLoss(torch.nn.Module):
    def __init__(self):
        super().__init__()  # required, otherwise the Parameter is not registered
        self.gamma = torch.nn.Parameter(torch.FloatTensor([.5]))

    def forward(self, od_loss, depth_loss):
        loss = od_loss + self.gamma * depth_loss
        return loss

c_loss = CustomLoss()
total_loss = c_loss(od_loss, depth_loss)

but printing the learnable parameter gamma shows that it didn't change.

The trainable parameters will be updated by the optimizer once the gradients have been calculated in the backward() pass and optimizer.step() has been called.
In your current code snippet you are printing the value of self.gamma before and after using it in the forward pass, so it's expected that the value hasn't changed yet.
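To illustrate, here is a minimal sketch (the loss values and learning rate are made up for the example): gamma keeps its initial value through the forward pass and only changes after backward() computes its gradient and optimizer.step() applies the update.

```python
import torch

class CustomLoss(torch.nn.Module):
    def __init__(self):
        super().__init__()  # required so the Parameter is registered
        self.gamma = torch.nn.Parameter(torch.FloatTensor([.5]))

    def forward(self, od_loss, depth_loss):
        return od_loss + self.gamma * depth_loss

c_loss = CustomLoss()
optimizer = torch.optim.SGD(c_loss.parameters(), lr=0.1)

# dummy stand-ins for the two losses
od_loss = torch.tensor(1.0)
depth_loss = torch.tensor(2.0)

before = c_loss.gamma.item()            # still 0.5: forward alone changes nothing
total_loss = c_loss(od_loss, depth_loss)
total_loss.backward()                   # d(total_loss)/d(gamma) = depth_loss = 2.0
optimizer.step()                        # gamma <- 0.5 - 0.1 * 2.0 = 0.3
after = c_loss.gamma.item()             # now 0.3
```

Printing gamma between forward and backward will always show the old value; only step() moves it.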

Okay, but I have two optimizers, one for each loss (there are two models, but I train them end to end). So I think (please correct me if I'm wrong) I shouldn't put the CustomLoss parameters in either of them, and should instead give them their own optimizer.

Yes, you could create a new optimizer for these parameters, if you want.
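A hedged sketch of that setup, assuming two models trained end to end (the Linear stand-ins and learning rates are illustrative, not from the original code): a third optimizer holds only the loss module's parameters, one backward() call fills the gradients for everything, and each step() updates its own group.

```python
import torch

model_a = torch.nn.Linear(4, 1)   # stand-in for the first model
model_b = torch.nn.Linear(4, 1)   # stand-in for the second model

class CustomLoss(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.gamma = torch.nn.Parameter(torch.FloatTensor([.5]))

    def forward(self, od_loss, depth_loss):
        return od_loss + self.gamma * depth_loss

c_loss = CustomLoss()
opt_a = torch.optim.Adam(model_a.parameters(), lr=1e-3)
opt_b = torch.optim.Adam(model_b.parameters(), lr=1e-3)
opt_loss = torch.optim.Adam(c_loss.parameters(), lr=1e-3)  # optimizer just for gamma

x = torch.randn(8, 4)
od_loss = model_a(x).pow(2).mean()
depth_loss = model_b(x).pow(2).mean()
total_loss = c_loss(od_loss, depth_loss)

for opt in (opt_a, opt_b, opt_loss):
    opt.zero_grad()
total_loss.backward()             # one backward through both models and gamma
for opt in (opt_a, opt_b, opt_loss):
    opt.step()
```

Alternatively, a single optimizer with parameter groups (one group per model plus one for c_loss.parameters()) would behave the same and saves the bookkeeping of three zero_grad()/step() calls.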

Thanks, I appreciate your help.