I have three `nn.Module`s joined together using an `nn.ModuleList`:

```python
decoder = nn.ModuleList([create_model() for _ in range(3)])
```
I have one optimizer over the trainable parameters of all 3 decoders (remember, `decoder` is a list of 3 different models):

```python
optimizer = Adam(decoder.parameters(), lr=1e-3, weight_decay=1e-5)
```
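For context, `nn.ModuleList` registers each submodule, so `decoder.parameters()` yields the parameters of all three models at once. A minimal sketch, assuming a toy `create_model` (the real one is not shown here):

```python
import torch.nn as nn

def create_model():  # hypothetical toy stand-in for the real decoder
    return nn.Linear(4, 1)

decoder = nn.ModuleList([create_model() for _ in range(3)])

# nn.Linear(4, 1) has 2 parameter tensors (weight and bias),
# so the ModuleList exposes 3 * 2 = 6 parameter tensors in total.
print(len(list(decoder.parameters())))  # 6
```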
Each decoder (`decoder[0]`, `decoder[1]` and `decoder[2]`) has different inputs, and I need to update their weights separately.
This is a brief version of how my loop looks.
```python
for epoch in range(100):
    for features in dataloader:  # features is a list of 3 inputs
        for idx, input in enumerate(features):
            # Forward pass: features[0] into decoder[0],
            # features[1] into decoder[1], features[2] into decoder[2]
            loss = decoder[idx](input)
            # Backward pass
            optimizer.zero_grad()
            loss.backward()
            # Adam step
            optimizer.step()
```
Am I updating the weight of each model with their corresponding error correctly?
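To make the question concrete, here is a self-contained toy version of the loop that checks which decoder actually receives gradients on each iteration. The `create_model` and the scalar loss are hypothetical stand-ins; `zero_grad(set_to_none=True)` is used explicitly so that untouched parameters have `grad is None` and Adam skips them:

```python
import torch
import torch.nn as nn
from torch.optim import Adam

def create_model():  # hypothetical toy stand-in for the real decoder
    return nn.Linear(4, 1)

decoder = nn.ModuleList([create_model() for _ in range(3)])
optimizer = Adam(decoder.parameters(), lr=1e-3, weight_decay=1e-5)

features = [torch.randn(2, 4) for _ in range(3)]  # one input per decoder

for idx, input in enumerate(features):
    loss = decoder[idx](input).mean()  # toy scalar loss
    optimizer.zero_grad(set_to_none=True)
    loss.backward()
    # Only decoder[idx] received gradients on this iteration; the other
    # decoders' grads are None after zero_grad, so optimizer.step() skips them.
    for j, m in enumerate(decoder):
        has_grad = any(p.grad is not None for p in m.parameters())
        assert has_grad == (j == idx)
    optimizer.step()
```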
If you prefer, here is a more complex version of my training loop:
```python
for e in range(100):
    for inputs in dataloader:
        with torch.no_grad():
            features = feature_extractor_resnet50(inputs)  # output of layers [1, 2, 3]
        for idx, input in enumerate(features):
            # Forward pass
            loss = decoder[idx](input)
            # Backward pass
            optimizer.zero_grad()
            loss.backward()
            # Adam step
            optimizer.step()
```