Multitask learning framework: how to perform backward pass?

I am new to pytorch (from torch) and I am trying to implement a multitask learning framework. Essentially, I have the following:

err1 = net1(input1)
err2 = net2(input2)

where net2 takes inputs from different layers of net1, and err1 and err2 are inputs to two different loss functions. I am confused about how to perform the backward pass here. I know that one can use torch.autograd.backward(variables, grads), but what are the variables and grads? Are the variables [err1, err2]? If so, how do I get the grads? Or am I completely on the wrong path? Any help would be appreciated.


One method for multi-task learning is to define a combined loss
loss = err1 + a*err2, where a is a constant that weights the second task.
Calling loss.backward() then performs a single backward pass through both nets (net1 and net2), so one optimizer step jointly updates them.
We are calling variable.backward(), which in turn calls torch.autograd.backward.
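
Here is a minimal sketch of that setup (module names, shapes, and the weighting constant are hypothetical, just to illustrate the idea): net1 returns both its task output and an intermediate feature, net2 consumes that feature, and a single optimizer over both nets is stepped after one joint backward pass.

import torch
import torch.nn as nn

class Net1(nn.Module):
    def __init__(self):
        super().__init__()
        self.feat = nn.Linear(10, 8)   # intermediate layer shared with net2
        self.head = nn.Linear(8, 1)    # task-1 head

    def forward(self, x):
        h = torch.relu(self.feat(x))
        return self.head(h), h         # task-1 output and the intermediate feature

class Net2(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(8, 1)    # task-2 head on top of net1's feature

    def forward(self, h):
        return self.head(h)

net1, net2 = Net1(), Net2()
criterion = nn.MSELoss()
# One optimizer over both nets so a single step updates all parameters.
optimizer = torch.optim.SGD(list(net1.parameters()) + list(net2.parameters()), lr=0.01)

x, y1, y2 = torch.randn(4, 10), torch.randn(4, 1), torch.randn(4, 1)
a = 0.5                                # task-weighting constant

out1, h = net1(x)
out2 = net2(h)
err1 = criterion(out1, y1)
err2 = criterion(out2, y2)

loss = err1 + a * err2                 # joint scalar loss
optimizer.zero_grad()
loss.backward()                        # one backward pass through both nets
optimizer.step()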

The gradient argument in the .backward call is the gradient with respect to the output (or loss).
For loss.backward() there is no need to provide a gradient, as the gradient of a scalar loss defaults to 1.
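
To see that default concretely, this small check (variable names are just for illustration) shows that loss.backward() and torch.autograd.backward with an explicit gradient of 1 give the same result for a scalar loss:

import torch

x = torch.randn(3, requires_grad=True)
loss = (x ** 2).sum()                  # scalar loss

loss.backward(retain_graph=True)       # no gradient argument needed; it defaults to 1
g1 = x.grad.clone()

x.grad.zero_()
torch.autograd.backward([loss], [torch.ones_like(loss)])  # explicit equivalent
print(torch.allclose(g1, x.grad))      # True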

Thanks! I think that should work. Now, a complete newbie question: is there any implementation of a pytorch multitask framework available that I can refer to?

Hi, did you find a framework that can be referred to?

I encountered the same problem, but I could not find any useful material on how to do this.

Hi, did you find any examples that can be referred to?