I’ve previously written code such as:

```python
def forward(self, input1, input2):
    out1 = network1(input1)
    out2 = network2(input2)
    # concatenate the two embeddings along the feature dimension
    embedded_input = torch.cat((out1, out2), 1)
    output = net(embedded_input)
    return output
```
And torch/autograd seems to know how to build the backprop graph in order to train this network.
However, if I define my operations in a for loop, rather than linearly, such as:
```python
def forward(self, input):
    embedded_input = None
    for i, network in enumerate(embedding_networks):
        out = network(input[i])
        if embedded_input is None:
            embedded_input = out
        else:
            # concatenate each new embedding along the feature dimension
            embedded_input = torch.cat((embedded_input, out), 1)
    output = net(embedded_input)
    return output
```
The forward/backward passes work, but when I inspect my network's parameters (calling .parameters() and iterating through each one's .shape), they seem to include only the final net object and not the modules in the embedding_networks list through which I first pass my input.
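For reference, this is roughly how I'm inspecting the parameters (model is a placeholder name for my top-level module):

```python
# `model` stands in for my top-level nn.Module instance
for p in model.parameters():
    print(p.shape)  # only prints shapes belonging to `net`
```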
Is this to be expected? Is there something obviously wrong with the second snippet compared to the first? How would I best achieve something like what’s shown in the second snippet?
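My best guess is that sub-networks stored in a plain Python list are never registered with the parent module, and that wrapping the list in nn.ModuleList would fix it. A minimal sketch of what I have in mind (class and attribute names are mine):

```python
import torch
import torch.nn as nn

class EmbedAndCombine(nn.Module):  # hypothetical name for my model
    def __init__(self, embedding_networks, net):
        super().__init__()
        # nn.ModuleList (unlike a plain Python list) registers each
        # sub-network, so their parameters show up in self.parameters()
        self.embedding_networks = nn.ModuleList(embedding_networks)
        self.net = net

    def forward(self, input):
        outs = [network(input[i]) for i, network in enumerate(self.embedding_networks)]
        embedded_input = torch.cat(outs, 1)
        return self.net(embedded_input)
```

Is that the right approach, or is there a more idiomatic way to do this?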