How to let the network learn the weights of a multi-task loss itself

(1) I set two learnable parameters w1 and w2, and add them to the optimizer:
w1 = Variable(torch.Tensor([1]), requires_grad = True)
w2 = Variable(torch.Tensor([1]), requires_grad = True)

(2) I pass them to the function train().

(3) In train():
loss = (1/torch.Tensor.exp(w1)) * loss_task1 + (1/torch.Tensor.exp(w2)) * loss_task2 + w1 + w2
loss.backward()
optimizer.step()

But I found that w1 and w2 have no gradient:
w1.grad is None

I want to know WHY, and how to achieve this 'dynamic loss weight learning'.

Variable has been deprecated since 0.4. Are you using a version older than 0.4?

In order to add those tensors to the model parameters you need to use the register_parameter method or assign an nn.Parameter rather than a plain tensor.

I don’t really know how Variables used to work.

A sanity check is to verify those variables are listed as model parameters; if they aren't, they will never be passed to the optimizer → no learning.
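
For example, a minimal sketch of that check (the model here is just a stand-in):

import torch
from torch import nn

model = nn.Linear(4, 2)                        # stand-in for any model
w1 = torch.tensor([1.0], requires_grad=True)   # created outside the module

# w1 does not appear here, so model.parameters() alone
# would never hand it to the optimizer:
print([name for name, _ in model.named_parameters()])
# -> ['weight', 'bias']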

I use PyTorch 1.0.

w1 = torch.nn.Parameter(torch.Tensor([1]))
w1.grad is still None

Can you provide a minimal example showing how you are doing it and how it fails?

Go this way:
self.register_parameter('w1', torch.nn.Parameter(torch.Tensor([1])))
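
Note that self.register_parameter only works inside an nn.Module. A minimal sketch of both registration styles (the module name here is made up):

import torch
from torch import nn

class LossWeights(nn.Module):
    def __init__(self):
        super().__init__()
        # assigning an nn.Parameter as an attribute registers it...
        self.w1 = nn.Parameter(torch.tensor([1.0]))
        # ...and register_parameter does the same thing explicitly
        self.register_parameter('w2', nn.Parameter(torch.tensor([1.0])))

m = LossWeights()
print([name for name, _ in m.named_parameters()])   # ['w1', 'w2']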

import torch
from torch import nn
from torchvision.models import vgg

def main():
    best_prec1 = 0
    lr = 0.0001
    epochs = 20
 
    model = vgg.vgg16(num_classes=1000, pretrained=True)

    model.cuda()

    criterion_A = nn.CrossEntropyLoss().cuda()    # loss of taskA
    criterion_B = nn.CrossEntropyLoss().cuda()    # loss of taskB
    
    #  learnable loss weight param
    log_sigma_A = torch.nn.Parameter(torch.Tensor([1]))     
    log_sigma_B = torch.nn.Parameter(torch.Tensor([1]))

    weight_list = []
    bias_list = []
    last_weight_list = []
    last_bias_list = []
    loss_weight_list = [log_sigma_A, log_sigma_B]

    for name, value in model.named_parameters():
        if 'classifier' in name:
            #print(name)
            if 'weight' in name:
                last_weight_list.append(value)
            elif 'bias' in name:
                last_bias_list.append(value)
        else:
            if 'weight' in name:
                weight_list.append(value)
            elif 'bias' in name:
                bias_list.append(value)

    optimizer = torch.optim.SGD([{'params': weight_list, 'lr': lr},
                     {'params': bias_list, 'lr': lr * 2},
                     {'params': last_weight_list, 'lr': lr * 10},
                     {'params': last_bias_list, 'lr': lr * 20},
                     {'params': loss_weight_list, 'lr': lr}], momentum=0.9, weight_decay=0.0005, nesterov=True)
    for epoch in range(0, epochs):
        scheduler.step(epoch)   # scheduler assumed defined elsewhere (omitted from this snippet)
        # train for one epoch
        train(train_loader, model, criterion_A, criterion_B, optimizer, epoch, log_sigma_A, log_sigma_B)
def train(train_loader, model, criterion_A, criterion_B, optimizer, epoch, log_sigma_A, log_sigma_B):
            ....
        log_sigma_A = log_sigma_A.cuda()
        log_sigma_B = log_sigma_B.cuda()
        sigma_A = torch.Tensor.exp(log_sigma_A)
        sigma_B = torch.Tensor.exp(log_sigma_B)

        loss = (1/(2*sigma_A)) * loss_A + (1/(2*sigma_B)) * loss_B + log_sigma_A + log_sigma_B
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

log_sigma_A.grad is None
log_sigma_B.grad is None

I have put my code above.

A plain assignment does not register the parameter; please use the solution above.

You mean
model.register_parameter('w1', torch.nn.Parameter(torch.Tensor([1])))?

Oh, I didn't see the code at all.

You should be able to just pass a tensor, rewriting it as:

import torch

def train():
    log_sigma_A = torch.tensor([1.]).requires_grad_()
    log_sigma_B = torch.tensor([1.]).requires_grad_()

    loss_weight_list = [log_sigma_A, log_sigma_B]
    optimizer = torch.optim.SGD([{'params': loss_weight_list, 'lr': 0.1}])

    for i in range(10):
        sigma_A = torch.Tensor.exp(log_sigma_A)
        sigma_B = torch.Tensor.exp(log_sigma_B)
        loss = sigma_A + sigma_B
        optimizer.zero_grad()
        loss.backward()
        # note: despite the 'grad' labels, these print the current parameter
        # values, not log_sigma_A.grad / log_sigma_B.grad
        print('A grad: %f' % log_sigma_A)
        print('B grad: %f' % log_sigma_B)
        print('Loss %f' % loss.item())
        optimizer.step()
A grad: 1.000000
B grad: 1.000000
Loss 5.436563
A grad: 0.728172
B grad: 0.728172
Loss 4.142581
A grad: 0.521043
B grad: 0.521043
Loss 3.367565
A grad: 0.352665
B grad: 0.352665
Loss 2.845707
A grad: 0.210379
B grad: 0.210379
Loss 2.468292
A grad: 0.086965
B grad: 0.086965
Loss 2.181716
A grad: -0.022121
B grad: -0.022121
Loss 1.956243
A grad: -0.119933
B grad: -0.119933
Loss 1.773959
A grad: -0.208631
B grad: -0.208631
Loss 1.623389
A grad: -0.289801
B grad: -0.289801
Loss 1.496825

There is learning.
In fact, I rewrote the snippet to emulate your code a little better:

log_sigma_A = torch.tensor([1.]).requires_grad_()
log_sigma_B = torch.tensor([1.]).requires_grad_()
loss_weight_list = [log_sigma_A, log_sigma_B]
optimizer = torch.optim.SGD([{'params': loss_weight_list, 'lr': 0.1}])

def train(log_sigma_A, log_sigma_B):
    # these reassignments shadow the outer leaf tensors with non-leaf
    # CUDA copies; the optimizer still updates the original leaves
    log_sigma_A = log_sigma_A.cuda()
    log_sigma_B = log_sigma_B.cuda()

    sigma_A = torch.Tensor.exp(log_sigma_A)
    sigma_B = torch.Tensor.exp(log_sigma_B)
    loss = sigma_A + sigma_B
    optimizer.zero_grad()
    loss.backward()
    # again: these print the parameter values, not the gradients
    print('A grad: %f' % log_sigma_A)
    print('B grad: %f' % log_sigma_B)
    print('Loss %f' % loss.item())
    optimizer.step()

for i in range(10):
    train(log_sigma_A, log_sigma_B)

Here everything is defined outside the train function, as in your code, yet it keeps working very well.

Can you try defining the logs with torch.tensor (lowercase tensor, which is not the same as torch.Tensor), using requires_grad=True and 1.0 (a float) instead of 1?
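
For reference, a quick sketch of the difference as I understand it:

import torch

a = torch.Tensor([1])     # legacy constructor, always float32
b = torch.tensor([1])     # dtype is inferred as int64; b.requires_grad_() would
                          # raise an error, only float tensors can require grad
c = torch.tensor([1.0], requires_grad=True)   # float32 leaf tensor, what you want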

The loss in your code is only sigma_A + sigma_B, without any task loss like a cross-entropy loss.
In my code the loss is more complicated: loss = (1/(2*sigma_A)) * loss_A + (1/(2*sigma_B)) * loss_B + log_sigma_A + log_sigma_B
When I call loss.backward(), I get no grad in log_sigma_A and log_sigma_B.
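(Differentiating by hand with sigma_A = exp(log_sigma_A), the gradient of that loss w.r.t. log_sigma_A is -(1/2) * exp(-log_sigma_A) * loss_A + 1, which is generically nonzero, so the gradient should certainly not be None.)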

import torch
from torch import nn
from torchvision.models import vgg16 as vgg

def main():
    best_prec1 = 0
    lr = 0.0001
    epochs = 20

    model = vgg(num_classes=1000, pretrained=False)

    model.cuda()

    criterion_A = nn.MSELoss().cuda()  # loss of taskA
    criterion_B = nn.CrossEntropyLoss().cuda()  # loss of taskB

    #  learnable loss weight param
    log_sigma_A = torch.tensor([1.]).requires_grad_()
    log_sigma_B = torch.tensor([1.]).requires_grad_()

    weight_list = []
    bias_list = []
    last_weight_list = []
    last_bias_list = []
    loss_weight_list = [log_sigma_A, log_sigma_B]

    for name, value in model.named_parameters():
        if 'classifier' in name:
            # print(name)
            if 'weight' in name:
                last_weight_list.append(value)
            elif 'bias' in name:
                last_bias_list.append(value)
        else:
            if 'weight' in name:
                weight_list.append(value)
            elif 'bias' in name:
                bias_list.append(value)

    optimizer = torch.optim.SGD([{'params': weight_list, 'lr': lr},
                                 {'params': bias_list, 'lr': lr * 2},
                                 {'params': last_weight_list, 'lr': lr * 10},
                                 {'params': last_bias_list, 'lr': lr * 20},
                                 {'params': loss_weight_list, 'lr': lr}], momentum=0.9, weight_decay=0.0005,
                                nesterov=True)
    for epoch in range(0, epochs):
        # train for one epoch
        train(model, criterion_A, optimizer, epoch, log_sigma_A, log_sigma_B)

def train(model, criterion_A, optimizer, epoch, log_sigma_AA, log_sigma_BB):

    # the arguments were renamed on purpose: the .cuda() results are
    # non-leaf copies, so gradients are read from log_sigma_AA / log_sigma_BB
    log_sigma_A = log_sigma_AA.cuda()
    log_sigma_B = log_sigma_BB.cuda()
    sigma_A = torch.Tensor.exp(log_sigma_A)
    sigma_B = torch.Tensor.exp(log_sigma_B)
    predA = model(torch.rand((1,3,224,224),device='cuda:0',requires_grad=True))
    loss_A = criterion_A(predA,torch.rand((1,1000),device='cuda:0'))
    # note: only task A is wired up in this repro, so the sigma_B term has no loss_B factor
    loss = (1 / (2 * sigma_A)) * loss_A + (1 / (2 * sigma_B)) + log_sigma_A + log_sigma_B

    optimizer.zero_grad()
    loss.backward()
    print('Grad A: %f' % log_sigma_AA.grad)   # grad is read from the original leaf tensor
    print('Loss: %f' % loss.item())
    optimizer.step()

main()

It still works with your code:

Grad A: 0.940426
Loss: 2.243513
Grad A: 0.939637
Loss: 2.243998
Grad A: 0.939842
Loss: 2.243357
Grad A: 0.938279
Loss: 2.244367
Grad A: 0.935748
Loss: 2.246241
Grad A: 0.939467
Loss: 2.241770
Grad A: 0.939254
Loss: 2.241145
Grad A: 0.937665
Loss: 2.241820
Grad A: 0.942895
Loss: 2.235607
Grad A: 0.934549
Loss: 2.242907
Grad A: 0.937016
Loss: 2.239339
Grad A: 0.937262
Loss: 2.237943
Grad A: 0.932476
Loss: 2.241532
Grad A: 0.934655
Loss: 2.238118
Grad A: 0.933905
Loss: 2.237595
Grad A: 0.935085
Loss: 2.235111
Grad A: 0.936593
Loss: 2.232268
Grad A: 0.939101
Loss: 2.228399
Grad A: 0.938238
Loss: 2.227876
Grad A: 0.939020
Loss: 2.225688

Are you trying to print the values after loss.backward()? Before that, there are no gradients.

Your code fails because you are reassigning log_sigma_A, losing the outer function scope.
Inside your function you are pointing to a non-leaf tensor (the CUDA copy of log_sigma) rather than the original leaf tensor defined outside the function.

You don't need to do anything special. What you are doing is fine, but if you want to check the gradients inside that function you need to rename log_sigma_A as I did.
log_sigma_A = log_sigma_AA.cuda() → here you can still check log_sigma_AA.grad.
log_sigma_A = log_sigma_A.cuda() → here you cannot check log_sigma_A.grad, as log_sigma_A has become a non-leaf tensor that shadows the original. Training still works, but you can no longer reach the original leaf log_sigma_A.
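
A tiny sketch of the leaf vs. non-leaf behaviour (assuming a CUDA device is available):

import torch

x = torch.tensor([1.0], requires_grad=True)   # leaf tensor
y = x.cuda()                 # the copy is a new, non-leaf tensor
(2 * y).sum().backward()
print(x.grad)                # tensor([2.]) -- gradients accumulate on the leaf
print(y.grad)                # None -- .grad is only populated for leaf tensors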

Oh god, you are right, I shouldn't reassign log_sigma_A.
Thank you very much!

Thank you again, you saved me!

Do you know why the grads for sigma_A and sigma_B are the same? I have exactly the same setup and got the same value for sigma_A and sigma_B.