How to do backward() for a net with multiple outputs?

For example, in Torch 7 I have a net whose last module is an nn.ConcatTable; I make gradOutputs a table of tensors and call net:backward(inputs, gradOutputs).

How do I do something similar in PyTorch? I tried calling backward() on each output separately, but it complained that backward should not be called multiple times through the graph.


I think you want torch.autograd.backward(variables, grads):

It takes a list of variables and a list of gradients (one for each variable).
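A minimal sketch of the idea (the two-headed model and all names here are illustrative, not from the thread): build several outputs from a shared trunk, then pass matching lists of outputs and gradients to torch.autograd.backward in one call, so the gradients from all heads accumulate into the shared parameters.

```python
import torch

# Hypothetical multi-output model: a shared trunk with two heads.
trunk = torch.nn.Linear(4, 8)
head_a = torch.nn.Linear(8, 2)
head_b = torch.nn.Linear(8, 3)

x = torch.randn(5, 4)
features = trunk(x)
out_a = head_a(features)
out_b = head_b(features)

# One gradient tensor per output, matching each output's shape.
grad_a = torch.ones_like(out_a)
grad_b = torch.ones_like(out_b)

# Backpropagate through both heads in a single call; gradients
# from both outputs accumulate into the shared trunk.
torch.autograd.backward([out_a, out_b], [grad_a, grad_b])

print(trunk.weight.grad.shape)
```

Calling out_a.backward(grad_a) and then out_b.backward(grad_b) would instead fail on the second call unless retain_graph=True is passed, because the shared part of the graph is freed after the first backward.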


Yes, that's exactly what I want! Thanks!
I hope this can be added to the tutorials/examples, since this kind of backward() is common in multi-task learning. I had assumed I had to call variable.backward() as in the deep learning examples.

Hi, the doc you pointed to has a minor problem: the second argument, grad_variables, should be a “sequence of tensors”, not a “sequence of Variables”.
See your code here:

Thanks for the report, we’ll fix that!
