Siamese network and graph convolution

I’m trying to make a prediction using a Siamese network combined with graph convolutions. The network takes two inputs, x1 and x2, but I don’t use the output generated from x2 when calculating the loss. When x1 and x2 are different, I get a completely different output; when x1 and x2 are the same, I get the expected output. I’m attaching the code below.

import torch.nn as nn

# _GraphConv, _ResGraphConv, and ModulatedGraphConv are defined elsewhere in my code.
class MyGCN(nn.Module):
    def __init__(self, adj, hid_dim, coords_dim=(2, 3), num_layers=4, nodes_group=None, p_dropout=None):
        super(MyGCN, self).__init__()
        _gconv_input = [_GraphConv(adj, coords_dim[0], hid_dim, p_dropout=p_dropout)]
        _gconv_layers = []

        for _ in range(num_layers):
            _gconv_layers.append(_ResGraphConv(adj, hid_dim, hid_dim, hid_dim, p_dropout=p_dropout))

        self.gconv_input = nn.Sequential(*_gconv_input)
        self.gconv_layers = nn.Sequential(*_gconv_layers)
        self.gconv_output = ModulatedGraphConv(hid_dim, coords_dim[1], adj)

    def forward_once(self, x):
        # Siamese-style: the same shared weights process each input.
        out = self.gconv_input(x)
        out = self.gconv_layers(out)
        out = self.gconv_output(out)
        return out

    def forward(self, x1, x2):
        out1 = self.forward_once(x1)
        out2 = self.forward_once(x2)
        return out1, out2

I call the forward method like this:

outputs, _ = model_pos(x1, x2)

And I’m calculating the loss like this:

loss = (1 - lamda) * criterion(outputs, targets) + lamda * criterionL1(outputs, targets)
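
For completeness, the surrounding training step looks roughly like this. I’m sketching criterion, criterionL1, lamda, targets, and the optimizer as placeholders; my real values don’t matter for the question:

import torch.nn as nn
import torch.optim as optim

criterion = nn.MSELoss()    # placeholder for my actual criterion
criterionL1 = nn.L1Loss()
lamda = 0.5                 # scalar mixing weight in [0, 1]
optimizer = optim.Adam(model_pos.parameters(), lr=1e-3)

optimizer.zero_grad()
outputs, _ = model_pos(x1, x2)  # out2 is computed but never used
loss = (1 - lamda) * criterion(outputs, targets) + lamda * criterionL1(outputs, targets)
loss.backward()
optimizer.step()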

Can anyone explain why the output changes when x1 and x2 are different, even though I’m not using the output of x2 to calculate the loss?
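
To make the problem concrete, here is a minimal check I would expect to pass, since out1 should only depend on x1. The input shape (batch, nodes, coords_dim[0]) below is just a hypothetical example:

import torch

x1 = torch.randn(8, 17, 2)  # hypothetical shape: batch 8, 17 graph nodes, 2 input coords
x2 = torch.randn(8, 17, 2)

model_pos.eval()            # disable dropout so repeated passes are deterministic
with torch.no_grad():
    out_a, _ = model_pos(x1, x2)          # x2 different from x1
    out_b, _ = model_pos(x1, x1.clone())  # x2 equal to x1
print(torch.allclose(out_a, out_b))       # I expect True: out1 never sees x2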

@ptrblck could you please take a look?