How to do backward() for a net with multiple outputs?

E.g., in Torch7 I have a net whose last module is an nn.ConcatTable; I then make gradOutputs a table of tensors and call net:backward(inputs, gradOutputs).

How do I do the same thing in PyTorch? I tried calling backward() on each output separately, but it complained that backward should not be called multiple times.


I think you want torch.autograd.backward(variables, grads):

http://pytorch.org/docs/autograd.html#torch.autograd.backward

It takes a list of variables and a list of gradients (one for each variable).
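For example, here's a minimal sketch of backpropagating through two outputs in one call (the tensors and shapes are made up for illustration):

```python
import torch

# Toy stand-in for a net with two outputs (all names are hypothetical).
x = torch.randn(4, 3, requires_grad=True)
out1 = x.sum(dim=1)      # first output, shape (4,)
out2 = (x ** 2).mean()   # second output, scalar

# One gradient per output, each matching that output's shape
# (the analogue of Torch7's gradOutputs table).
grads = [torch.ones_like(out1), torch.ones_like(out2)]

# A single call backpropagates through both outputs;
# their contributions are accumulated into x.grad.
torch.autograd.backward([out1, out2], grads)
print(x.grad.shape)  # torch.Size([4, 3])
```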


Yes, that's exactly what I want! Thanks!
I hope this can be added to the tutorials/examples, since this kind of backward() is common in multi-task learning. I had thought I was required to write code like variable.backward(), as in the deep learning examples.
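For anyone finding this later, here's a rough multi-task sketch using the same call. The network, targets, and loss choices are hypothetical, just to show the pattern:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical two-task setup: one shared layer, one loss per task.
net = nn.Linear(10, 2)
x = torch.randn(8, 10)
y1, y2 = torch.randn(8), torch.randn(8)

out = net(x)
loss1 = F.mse_loss(out[:, 0], y1)  # task 1 loss
loss2 = F.mse_loss(out[:, 1], y2)  # task 2 loss

# Both losses go through the graph in one backward pass;
# since each loss is a scalar, the matching gradient is just 1.
torch.autograd.backward([loss1, loss2],
                        [torch.ones_like(loss1), torch.ones_like(loss2)])
```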

Hi, the doc you pointed to has a minor problem: the second argument, grad_variables, should be a “sequence of tensors”, not a “sequence of Variables”.
See your code here:

Thanks for the report, we’ll fix that!
