Running the same network multiple times

Hello,

I was wondering whether the graph that gets constructed for backpropagation can accumulate losses/gradients that have been computed independently across multiple runs of the same network model.

For example, assuming we have a VGG architecture and we run multiple images through the model, applying a different loss to each output, would PyTorch compute the updates properly (such that the network is guided by all the losses, not just the most recent one overwriting the gradients)?

Hello,

in general, yes, but it is expensive in terms of memory. Doing several .backward() calls (but only one optimizer .step()) may be a more memory-efficient way to produce the same result.
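For example, something like this minimal sketch (the model, loss, and data below are just placeholders for illustration):

```python
import torch
from torch import nn, optim
from torchvision import models

model = models.vgg16(num_classes=10)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# hypothetical (image, target) pairs, one per forward pass
pairs = [(torch.randn(1, 3, 224, 224), torch.tensor([0])) for _ in range(3)]

optimizer.zero_grad()
for image, target in pairs:
    output = model(image)
    loss = criterion(output, target)
    loss.backward()   # gradients accumulate into .grad across calls
optimizer.step()      # one update guided by all the losses
```

Because each backward() frees its graph right away, only one graph is alive at a time, which is where the memory saving comes from.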

Best regards

Thomas


Probably best to just compute all losses through one combined cost function, so the loss incorporates all the loss outputs, and then backprop on that loss.
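
Roughly like this (a sketch only; the model, criterion, and data names are placeholders):

```python
import torch
from torch import nn, optim
from torchvision import models

model = models.vgg16(num_classes=10)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

# hypothetical inputs and targets just for illustration
images = [torch.randn(1, 3, 224, 224) for _ in range(3)]
targets = [torch.tensor([0]) for _ in range(3)]

# one combined loss over all outputs, then a single backward pass
total_loss = sum(criterion(model(img), tgt)
                 for img, tgt in zip(images, targets))

optimizer.zero_grad()
total_loss.backward()  # gradients flow back from the combined loss
optimizer.step()
```

Note this keeps all the forward graphs in memory until the single backward() runs, which is the memory cost Thomas mentioned above.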