Trainable weights

Hello everyone!
I am working with PyTorch and implementing a semantic segmentation method. The loss function is a combination of 6 loss terms, i.e. L = L1 + L2 + L3 + L4 + L5 + L6. My requirement is to use trainable weights for these 6 loss terms, i.e. L = w1*L1 + w2*L2 + w3*L3 + w4*L4 + w5*L5 + w6*L6.
To make the weights trainable, I am creating them as tensors that autograd can track, i.e.
w1 = torch.randn(x.size(), requires_grad=True), and I am doing the same for all 6 weights.
Does this approach take care of backpropagation, and is it the correct way to use trainable weights in PyTorch?
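To illustrate, here is a minimal sketch of what I mean (the loss values below are just placeholders for my actual 6 loss functions, and I'm assuming each weight is a single scalar):

```python
import torch

# Placeholder per-term losses; in my real code these come from 6 different loss functions.
losses = [torch.rand(1) for _ in range(6)]

# One trainable scalar weight per loss term (note: requires_grad, not require_grad).
weights = [torch.randn(1, requires_grad=True) for _ in range(6)]

# Weighted total loss: L = w1*L1 + ... + w6*L6
total_loss = sum(w * l for w, l in zip(weights, losses))
```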


Creating the weights this way should be correct, and the optimizer should be able to update them, as long as you pass them to it properly.
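Something along these lines should work; this is just a sketch with a dummy stand-in model and placeholder losses, not your actual setup:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the real segmentation model and the 6 loss terms.
model = nn.Conv2d(3, 2, kernel_size=3, padding=1)
weights = [torch.randn(1, requires_grad=True) for _ in range(6)]

# Pass the model parameters and the extra weight tensors to the same optimizer,
# e.g. via parameter groups.
optimizer = torch.optim.Adam([
    {"params": model.parameters()},
    {"params": weights, "lr": 1e-3},
])

# Dummy forward pass and placeholder per-term losses.
out = model(torch.randn(1, 3, 8, 8))
losses = [out.pow(2).mean() for _ in range(6)]
loss = sum(w * l for w, l in zip(weights, losses))

# backward() populates .grad for the weight tensors as well, and step()
# updates them together with the model parameters.
optimizer.zero_grad()
loss.backward()
optimizer.step()
```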

However, wouldn’t the optimizer try to push these weights towards zero (or rather towards negative values), since that would decrease the loss?