For example, in Torch 7, if the last module of my net is an nn.ConcatTable, I can make gradOutputs a table of tensors and call net:backward(inputs, gradOutputs).
How can I do the same thing in PyTorch? I tried calling backward() on each output, but it complained that backward should not be called multiple times.
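For anyone landing here: the closest PyTorch analogue of Torch 7's net:backward(inputs, gradOutputs) is torch.autograd.backward, which accepts a sequence of output tensors and a matching sequence of gradients. A minimal sketch (the two-headed model here is hypothetical, just to produce multiple outputs):

```python
import torch

# Hypothetical shared trunk with two output heads
x = torch.randn(4, 8)
w = torch.randn(8, 8, requires_grad=True)
h = x @ w                     # shared trunk
out1 = h.sum(dim=1)           # head 1
out2 = (h ** 2).sum(dim=1)    # head 2

# Backpropagate through both heads in a single call,
# analogous to passing a table of gradOutputs in Torch 7.
torch.autograd.backward(
    [out1, out2],
    [torch.ones_like(out1), torch.ones_like(out2)],
)

print(w.grad.shape)  # gradients from both heads accumulate into w.grad
```

Because both gradient computations happen in one call, the graph is only traversed once and no retain flag is needed.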
Yes, that is exactly what I want! Thanks!
I hope this can be added to the tutorials/examples, since this kind of backward() is common in multi-task learning. I had thought I needed to call variable.backward() as in the deep learning example.
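For the multi-task case specifically, there are two common patterns besides torch.autograd.backward: sum the per-task losses into one scalar and call backward() once, or call backward() per loss while keeping the graph alive. A sketch with made-up losses (the model and targets are placeholders):

```python
import torch
import torch.nn.functional as F

# Placeholder two-task setup: one weight matrix, one output column per task
x = torch.randn(4, 8)
w = torch.randn(8, 2, requires_grad=True)
pred = x @ w
target1 = torch.randn(4)
target2 = torch.randn(4)

loss1 = F.mse_loss(pred[:, 0], target1)
loss2 = F.mse_loss(pred[:, 1], target2)

# Pattern A: one backward pass over the summed scalar loss
(loss1 + loss2).backward()

# Pattern B: separate backward calls; the first must retain the graph
# loss1.backward(retain_graph=True)
# loss2.backward()
```

Pattern A is usually preferred: it traverses the graph once, and task weighting is just a weighted sum of the losses.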
Hi, the doc you pointed to has a minor problem: the second argument, grad_variables, should be described as a "sequence of tensors", not a "sequence of Variables".
See your code here: