Does PyTorch support multiple outputs from the same inputs?

Hi, Juan

So in this code, common_operation will be invoked twice, but during the invocation of operation2 the output of common_operation is somehow cached?

From my reading of autograd, gradients accumulate over multiple calls of the model. So I thought that in this code: 1. there will be redundant computation of common_operation (it is computed one extra time); 2. two computation graphs are generated; 3. gradients with respect to common_operation's parameters are accumulated.
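To make my understanding concrete, here is a minimal sketch of what I mean. `common`, `head1`, and `head2` are placeholder modules standing in for common_operation, operation1, and operation2; the shared part is computed once and reused, and the two backward calls accumulate into the same `.grad` fields:

```python
import torch

torch.manual_seed(0)
common = torch.nn.Linear(4, 4)  # stands in for common_operation
head1 = torch.nn.Linear(4, 1)   # stands in for operation1
head2 = torch.nn.Linear(4, 1)   # stands in for operation2
x = torch.randn(2, 4)

# Compute the shared part once and reuse the tensor:
# one graph node for common_operation, no redundant forward pass.
shared = common(x)
out1 = head1(shared).sum()
out2 = head2(shared).sum()

# Backward on each output separately; grads accumulate across calls.
out1.backward(retain_graph=True)
out2.backward()
g_accumulated = common.weight.grad.clone()

# Compare against a single backward on the summed loss.
common.weight.grad = None
shared2 = common(x)
(head1(shared2).sum() + head2(shared2).sum()).backward()
g_joint = common.weight.grad.clone()

print(torch.allclose(g_accumulated, g_joint))
```

If calling `common_operation(x)` twice really does rebuild its part of the graph each time, I would expect the reuse above to be the way to avoid the redundant computation.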

How does the ‘store’ you are referring to work? Thanks.
(I didn’t wrap it in forward because the model is a generative model with many sub-modules, and there isn’t a clearly defined forward for the model as a whole.)