Hello all.
I have three nn.Module models joined together in an nn.ModuleList:

decoder = nn.ModuleList([create_model() for _ in range(3)])
I have a single optimizer over the trainable parameters of all 3 decoders (recall that decoder is a list of 3 different models):

optimizer = Adam(decoder.parameters(), lr=1e-3, weight_decay=1e-5)

Each decoder (decoder[0], decoder[1], and decoder[2]) receives a different input, and I need to update their weights separately.
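For reference, the alternative I am trying to avoid is one optimizer per decoder. A minimal, self-contained sketch (the nn.Linear stand-in for create_model() is hypothetical; my real models are larger):

```python
import torch.nn as nn
from torch.optim import Adam

def create_model():
    # Hypothetical stand-in for my real decoder architecture
    return nn.Linear(4, 1)

decoder = nn.ModuleList([create_model() for _ in range(3)])

# One Adam instance per decoder, so each step can only ever
# touch that decoder's parameters
optimizers = [Adam(d.parameters(), lr=1e-3, weight_decay=1e-5) for d in decoder]
```

I would like to avoid this extra bookkeeping if the single shared optimizer below already behaves correctly.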
Here is a brief version of what my training loop looks like:
for epoch in range(100):
    for features in dataloader:  # features is a list of 3 inputs
        for idx, feat in enumerate(features):
            # Forward pass: features[0] into decoder[0], features[1] into
            # decoder[1], features[2] into decoder[2]; each model returns its loss
            loss = decoder[idx](feat)
            # Backward pass
            optimizer.zero_grad()
            loss.backward()
            # Adam step
            optimizer.step()
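As a sanity check, here is a runnable toy version of one iteration of the loop above (ToyDecoder is a hypothetical stand-in for create_model(); like my real models, its forward returns a scalar loss directly):

```python
import torch
import torch.nn as nn
from torch.optim import Adam

class ToyDecoder(nn.Module):
    # Hypothetical stand-in: a linear layer whose forward returns a scalar "loss"
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 1)

    def forward(self, x):
        return self.fc(x).pow(2).mean()

decoder = nn.ModuleList([ToyDecoder() for _ in range(3)])
optimizer = Adam(decoder.parameters(), lr=1e-3, weight_decay=1e-5)
features = [torch.randn(8, 4) for _ in range(3)]

# One forward/backward for decoder[0] only
optimizer.zero_grad()
loss = decoder[0](features[0])
loss.backward()

# Only decoder[0]'s parameters received gradients; the others stay at None
grads_present = [any(p.grad is not None for p in d.parameters()) for d in decoder]
print(grads_present)  # [True, False, False]

# Adam skips parameters whose .grad is None, so this step leaves
# decoder[1] and decoder[2] untouched
before = [p.detach().clone() for p in decoder[1].parameters()]
optimizer.step()
unchanged = all(torch.equal(b, p.detach()) for b, p in zip(before, decoder[1].parameters()))
print(unchanged)  # True
```

Note that this isolation relies on the untouched gradients being None; if they were zero tensors instead, Adam's weight decay and momentum would still move those parameters, which is part of what I want to confirm.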
Am I updating the weights of each model with its corresponding error correctly?
If you prefer, here is a more complete version of my training loop:
for e in range(100):
    for inputs in dataloader:
        with torch.no_grad():
            features = feature_extractor_resnet50(inputs)  # outputs of layers [1, 2, 3]
        for idx, feat in enumerate(features):
            # Forward pass
            loss = decoder[idx](feat)
            # Backward pass
            optimizer.zero_grad()
            loss.backward()
            # Adam step
            optimizer.step()
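For completeness, a self-contained toy version of this fuller loop (toy_extractor is a hypothetical stand-in for feature_extractor_resnet50, returning three feature tensors of different widths; ToyDecoder stands in for create_model()):

```python
import torch
import torch.nn as nn
from torch.optim import Adam

class ToyDecoder(nn.Module):
    # Hypothetical stand-in for create_model(): forward returns a scalar loss
    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, 1)

    def forward(self, x):
        return self.fc(x).pow(2).mean()

def toy_extractor(x):
    # Hypothetical stand-in for feature_extractor_resnet50:
    # three "feature maps" of widths 4, 8, and 16
    return [x[:, :4], x[:, :8], x]

decoder = nn.ModuleList([ToyDecoder(d) for d in (4, 8, 16)])
optimizer = Adam(decoder.parameters(), lr=1e-3, weight_decay=1e-5)
dataloader = [torch.randn(8, 16) for _ in range(5)]  # fake batches

for e in range(2):
    for inputs in dataloader:
        with torch.no_grad():
            features = toy_extractor(inputs)  # frozen feature extractor
        for idx, feat in enumerate(features):
            loss = decoder[idx](feat)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

Since the extractor runs under torch.no_grad(), the features carry no graph and each backward() only reaches the one decoder that produced the loss.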