Multi-output regression problem using a Feed-forward Neural Network

Hello guys! I’m training a Feed-forward Neural Network (FFNN) with 11 inputs and 3 outputs for a regression problem. The FFNN structure is simple: the hidden layers consist of Linear, ReLU (and BatchNorm) layers. The problem is that the three outputs are not on the same scale, e.g., outputs 1 & 2 lie within [-0.1, 0.1] while output 3 lies within [-0.001, 0.001]. Hence, when the MSE loss is backpropagated, the contribution of output 3's error is negligible (too small), which leads to poor learning performance on output 3 (learning of outputs 1 & 2 is good). I did some exploring and noticed that an FFNN (referring specifically to my model, not to be confused with the RBFNN mentioned below, which is also feedforward) does not guarantee good performance on multi-output regression in general, especially when the outputs are on different scales.
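
For reference, here is a minimal sketch of the kind of network I mean (the hidden width is just a placeholder, not my exact configuration):

```python
import torch.nn as nn

# Sketch of the FFNN described above: 11 inputs, 3 outputs,
# hidden blocks of Linear -> BatchNorm -> ReLU.
# The hidden width (64) is a placeholder for illustration only.
model = nn.Sequential(
    nn.Linear(11, 64),
    nn.BatchNorm1d(64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.BatchNorm1d(64),
    nn.ReLU(),
    nn.Linear(64, 3),  # 3 regression outputs on very different scales
)
```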

Well, here are some things I have tried or considered:

  1. I scaled the MSE loss for each output, similar to the methods mentioned here (see the sketch after this list). It seems to work sometimes, but either way it requires a lot of tuning.
  2. Some papers indicate that a Radial Basis Function Neural Network (RBFNN) works better in such circumstances. I'll try it, of course, but I'm more curious about how to solve this problem for my FFNN.
  3. Training several single-output FFNNs instead of one multi-output FFNN would solve this problem, but it is time-consuming and complicates the workflow, so I won't consider it for now.
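
For item 1, here is a minimal sketch of what I mean by scaling the per-output MSE losses (PyTorch; the weight values are hypothetical, hand-picked numbers, which is exactly the tuning burden I mentioned):

```python
import torch

def weighted_mse(pred, target, weights):
    """MSE computed per output, then combined with fixed weights.

    pred, target: tensors of shape (batch, 3)
    weights: tensor of shape (3,), e.g. a larger weight for output 3
    so its small-scale error still contributes to the gradient.
    """
    per_output = ((pred - target) ** 2).mean(dim=0)  # shape (3,)
    return (weights * per_output).sum()

# Hand-tuned weights (hypothetical): roughly inverse to the squared
# output ranges (0.1^2 vs 0.001^2), so each term has comparable magnitude.
weights = torch.tensor([1.0, 1.0, 1.0e4])
# loss = weighted_mse(model(x), y, weights)
```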

Hope someone can share some thoughts or experience on multi-output regression with FFNNs. Thanks in advance!

You can just map your outputs to normalized values and train against those. In more complex cases, where the outputs are correlated, a whitening transformation may work better than per-output scalar scaling.
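
As a sketch of both options, assuming scikit-learn-style preprocessing (the `y_train` data here is just placeholder values for illustration):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Placeholder targets of shape (n_samples, 3), mimicking the scales
# described in the question: outputs 1 & 2 ~[-0.1, 0.1], output 3 ~[-0.001, 0.001].
y_train = np.random.uniform(
    [-0.1, -0.1, -0.001], [0.1, 0.1, 0.001], size=(1000, 3)
)

# Per-output scaling: each target gets zero mean and unit variance,
# so all three contribute comparably to the MSE.
scaler = StandardScaler()
y_scaled = scaler.fit_transform(y_train)

# Train the network on y_scaled; at inference time, undo the scaling:
# y_pred = scaler.inverse_transform(network_output)

# If the outputs are strongly correlated, whitening decorrelates them
# as well as rescaling them (scalar scaling only handles the diagonal):
whitener = PCA(whiten=True)
y_white = whitener.fit_transform(y_train)
# ...and invert with whitener.inverse_transform(...) after prediction.
```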