I have two face image datasets: one for face alignment (landmark) regression and one for age regression. I use a single model to regress both tasks, like this:
class Net(nn.Module):
    def forward(self, x):
        x = self.features(x)    # shared backbone
        x1 = self.features1(x)  # landmark (alignment) head
        x2 = self.features2(x)  # age head
        return [x1, x2]
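For context, a minimal runnable sketch of the network shape (the backbone layers and head sizes here are placeholders, not my real architecture):

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # shared backbone (placeholder layers, just to make it run)
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.features1 = nn.Linear(16, 10)  # landmark head: 5 points x (x, y)
        self.features2 = nn.Linear(16, 1)   # age head: scalar age

    def forward(self, x):
        x = self.features(x)
        return [self.features1(x), self.features2(x)]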
class LossFN(nn.Module):
    def forward(self, input, target):
        # input[0] is the model output on the alignment batch, input[1] on the age batch
        landmark_loss = self.L1Loss1(input[0][0], target[0])  # landmark head output
        age_loss = self.L1Loss2(input[1][1], target[1])       # age head output
        loss = self.alignment_ratio * landmark_loss + age_loss
        return loss
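Filled in so it runs, the loss module is roughly this (the alignment_ratio value here is arbitrary, not my tuned one):

class LossFN(nn.Module):
    def __init__(self, alignment_ratio=10.0):
        super().__init__()
        self.alignment_ratio = alignment_ratio  # weights the landmark term
        self.L1Loss1 = nn.L1Loss()  # landmark loss
        self.L1Loss2 = nn.L1Loss()  # age loss

    def forward(self, input, target):
        landmark_loss = self.L1Loss1(input[0][0], target[0])
        age_loss = self.L1Loss2(input[1][1], target[1])
        return self.alignment_ratio * landmark_loss + age_loss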
for image, label in dataloader:
    image1 = image[0]  # batch from the alignment dataset
    image2 = image[1]  # batch from the age dataset
    output = []
    output.append(model(image1))  # forward pass on the alignment batch
    output.append(model(image2))  # forward pass on the age batch
    loss = loss_fn(output, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
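For validation I run the same forward pass under eval mode; a simplified sketch (valid_dataloader mirrors the training one):

model.eval()  # switching this line to model.train() is what makes the valid loss behave
with torch.no_grad():
    for image, label in valid_dataloader:
        output = [model(image[0]), model(image[1])]
        valid_loss = loss_fn(output, label)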
When I use this process for two tasks (regression and classification), the loss behaves well. But when I use it for two regression tasks, the training loss behaves well while the validation loss does not. The weird thing is that when I switch from eval mode to train mode for validation, the loss behaves well again. Is there any possible reason for this?
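In case it helps, this kind of check isolates the mode switch from the rest of my validation code (fixed_batch is any single validation batch; a hypothetical snippet, not something from my training script):

with torch.no_grad():
    model.eval()
    out_eval = model(fixed_batch)[0]   # landmark predictions in eval mode
    model.train()
    out_train = model(fixed_batch)[0]  # same batch, train mode
    print((out_eval - out_train).abs().max())  # how far apart the two modes are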