Recalculate gradients

Hi All,

I have calculated some weights per label for my dataset. Something like this:

label_weights = {0: 2.0, 1: 3.5, 2: 0.7, 3: 1.5}

Next, I want to multiply the gradient of the loss function by these weights according to the label of each input image. Will something like this work:

for x, t in train_loader:
    optimizer.zero_grad()
    z = network(x.to(device))
    J = loss(z, t.to(device))
    J.backward()
    for p in network.parameters():
        p.grad *= label_weights  # multiply the gradients by the per-label weights
    optimizer.step()

Or is there another way to do it?

Thanks in advance for your help :)

No, I don’t think your approach would work: the .grad attributes do not contain any label information, so you won’t be able to multiply the dict with them.
Depending on the loss function you are using, you could either pass the weight argument (e.g. to nn.CrossEntropyLoss), or compute an unreduced loss (reduction='none'), multiply it by the per-sample weights, and apply the reduction yourself before calling backward().
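
For completeness, here is a minimal sketch of both options (it reuses network, optimizer, train_loader, and device from your snippet, and assumes the 4 classes from your label_weights dict):

import torch
import torch.nn as nn

# Per-class weights as a tensor; entry i is the weight for class i.
class_weights = torch.tensor([2.0, 3.5, 0.7, 1.5], device=device)

# Option 1: let the criterion apply the weights internally.
criterion = nn.CrossEntropyLoss(weight=class_weights)

# Option 2: get the unreduced per-sample loss and reduce it manually.
criterion_unreduced = nn.CrossEntropyLoss(reduction='none')

for x, t in train_loader:
    optimizer.zero_grad()
    z = network(x.to(device))
    t = t.to(device)
    J = criterion_unreduced(z, t)        # shape: [batch_size]
    J = (J * class_weights[t]).mean()    # weight each sample by its label's weight
    J.backward()
    optimizer.step()

Note that the two options are not numerically identical: with the default reduction='mean', nn.CrossEntropyLoss normalizes the weighted sum by the sum of the per-sample weights, while the manual .mean() above divides by the batch size.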

Thank you for your reply. I found an answer that elaborates on the first method (Passing the weights to CrossEntropyLoss correctly). Based on that answer, is it enough to pass the list of weights without the class labels, even when the labels are one-hot vectors?

nn.CrossEntropyLoss doesn’t work with one-hot encoded targets, but expects a target tensor containing class indices. The passed weights correspond to these class indices as well, i.e. the i-th entry of the weight tensor is applied to samples of class i.
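
E.g., if your targets are one-hot encoded, you could recover the class indices via argmax before passing them to the criterion (the shapes below are just for illustration):

import torch
import torch.nn as nn

class_weights = torch.tensor([2.0, 3.5, 0.7, 1.5])
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(8, 4)                                # [batch_size, num_classes]
targets_onehot = torch.eye(4)[torch.randint(0, 4, (8,))]  # dummy one-hot targets
targets = targets_onehot.argmax(dim=1)                    # one-hot -> class indices
loss = criterion(logits, targets)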