Fusion/ensemble of two MLP models

In my example the same input is passed to both models; the features produced by each model are then concatenated before being passed to a final classifier.
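A minimal sketch of this fusion setup in PyTorch; the layer sizes (`in_features=10`, `hidden=32`, `num_classes=2`) and module names are assumptions for illustration, not taken from the original example:

```python
import torch
import torch.nn as nn

class FusionEnsemble(nn.Module):
    def __init__(self, in_features=10, hidden=32, num_classes=2):
        super().__init__()
        # Two independent MLP feature extractors (hypothetical sizes)
        self.model_a = nn.Sequential(nn.Linear(in_features, hidden), nn.ReLU())
        self.model_b = nn.Sequential(nn.Linear(in_features, hidden), nn.ReLU())
        # Final classifier consumes the concatenated features
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):
        # The same input x is fed to both models
        feat_a = self.model_a(x)
        feat_b = self.model_b(x)
        # Concatenate along the feature dimension
        fused = torch.cat((feat_a, feat_b), dim=1)
        return self.classifier(fused)

model = FusionEnsemble()
out = model(torch.randn(4, 10))  # batch of 4 samples
print(out.shape)  # torch.Size([4, 2])
```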
In your code snippet you are passing an undefined x2 tensor to self.model_G, so I guess this might be a typo?