Hi guys, I have two models.
The first is a compound model made of fan_resnet50 and an ad hoc resnet with 4 BasicBlocks.
The second model is a pretrained resnet50.
I need to remove the FC layer from both models and torch.cat the outputs of these two models.
So the problem is that because you create this Linear during the forward, it will have new random weights at each forward pass, and its weights are not passed to the optimizer when you do model.parameters().
Maybe that is what you want.
If you don’t want that and want the Linear to be learnt, you need to push its creation into the __init__ function with self.fc = nn.Linear(128 * (n_features_x + n_features_y), 4), and change your forward to do self.fc(z).
Keep in mind that the forward method in PyTorch actually executes this code every time you do a forward pass with a given input.
So setting the models’ fc layers to Identity should likewise be done only once, in __init__.
The cat operation takes Tensors as input, not layers.
I think you meant something like the following, assuming x is the input for both model 2 and model 3: