Network which splits and joins again

I want to implement a network like a fork-join model, just like the one shown in the image below.

How can I implement this in PyTorch?
I have tried the following implementation, but I am not getting the desired results. Is there something wrong at the implementation level?

import copy
from collections import OrderedDict

import torch
import torch.nn as nn


class Network(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        # pre-trained features (vgg16 here is my own Caffe-style loader)
        backbone = vgg16(is_caffe=True)
        self.encoder = copy.deepcopy(backbone)

        # 1x1 conv applied in every branch
        l7 = [('fc7', nn.Conv2d(512, 1, 1))]
        # 1x1 conv applied after the join; assumes len(weights) == num_classes
        l8 = [('fc8', nn.Conv2d(num_classes, num_classes, 1))]
        self.conv1_1_1 = nn.Sequential(OrderedDict(l7))
        self.conv1_1_2 = nn.Sequential(OrderedDict(l8))

    def forward(self, x, weights):
        x = self.encoder(x)
        x = x / x.max()  # normalize the encoded embedding
        # fork here: add a different weight tensor in each branch
        # (the weights do not have any gradient flowing through them)
        suppQueryFusion = [torch.add(x, weight.repeat(1, 1, x.size(2), x.size(3)))
                           for weight in weights]
        # the same 1x1 conv reduces every branch to a single channel
        weightedFeats = [self.conv1_1_1(feat) for feat in suppQueryFusion]
        # join here: concatenate the per-branch maps along the channel dim
        concatedFeats = torch.cat(weightedFeats, dim=1)

        x = torch.sigmoid(concatedFeats)
        x = self.conv1_1_2(x)
        return x

As you are applying the same convolution, self.conv1_1_1, to each feature, you are not doing what you showed in your picture. You would have to reshape everything into the batch dimension and then apply the convolution once (see the sketch below). The way you are doing it is more of a Siamese network than branches running "in parallel with shared weights".
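
For reference, here is a minimal sketch of that batch-folding trick; all shapes here are made up for illustration:

import torch
import torch.nn as nn

conv = nn.Conv2d(512, 1, 1)  # one convolution, shared by all branches

# three branch tensors of shape (batch, 512, H, W)
branches = [torch.randn(2, 512, 8, 8) for _ in range(3)]

stacked = torch.cat(branches, dim=0)    # (3 * batch, 512, 8, 8)
out = conv(stacked)                     # one forward pass instead of a Python loop
outs = out.chunk(len(branches), dim=0)  # back to one (batch, 1, 8, 8) per branch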

@JuanFMontesinos my aim is to take the same normalized encoded embedding, add different weights to it in each branch, apply a 1x1 conv to get a single channel per branch, and then combine them.

But do you want that convolution to have shared weights, or do you want to apply convolutions with different weights?

weightedFeats = [self.conv1_1_1(feat) for feat in suppQueryFusion]

This line is the problematic one. The rest is just fine.

Could you help me modify the __init__ function to have a different self.conv1_1_1 for each of the classes?

You can create as many convolutions as you want. You just have to know beforehand how many you need.

self.convolution_name_1 = nn.Conv2d(512, 1, 1)
self.convolution_name_2 = nn.Conv2d(512, 1, 1)

and so on
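
If you prefer not to hard-code them, nn.ModuleList lets you create one convolution per class in a loop. A minimal sketch (BranchConvs is just an illustrative name), assuming you want num_classes branches with the same 512 -> 1 1x1 convolutions as in your code:

import torch.nn as nn

class BranchConvs(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        # one independent 1x1 convolution per branch
        self.branch_convs = nn.ModuleList(
            [nn.Conv2d(512, 1, 1) for _ in range(num_classes)]
        )

    def forward(self, feats):
        # feats: a list with one (N, 512, H, W) tensor per branch
        return [conv(f) for conv, f in zip(self.branch_convs, feats)]

Unlike a plain Python list, nn.ModuleList registers each convolution as a submodule, so its parameters show up in .parameters() and get trained.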

Sorry, I am new to PyTorch, hence all the questions. What about the nn.Sequential?

Hi, nn.Sequential is a container that groups several layers and later applies them in sequence. As you are defining a single operation inside it, it's not really necessary to use it. However, it's fine if you do.
For example, these two would be equivalent:

def __init__(self):
    self.conv1 = nn.Conv2d(8, 8, 3, padding=1)
    self.conv2 = nn.Conv2d(8, 8, 3, padding=1)
    self.conv3 = nn.Conv2d(8, 8, 3, padding=1)

def forward(self, x):
    x = self.conv1(x)
    x = self.conv2(x)
    x = self.conv3(x)
    return x

and

def __init__(self):
    self.my_seq_convs = nn.Sequential(
        nn.Conv2d(8, 8, 3, padding=1),
        nn.Conv2d(8, 8, 3, padding=1),
        nn.Conv2d(8, 8, 3, padding=1),
    )

def forward(self, x):
    x = self.my_seq_convs(x)
    return x

Note that these are just sketches of the two __init__/forward pairs; the layer arguments are placeholders.
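
An nn.Sequential is itself callable, so (with made-up shapes) you can also use one standalone:

import torch
import torch.nn as nn

seq = nn.Sequential(
    nn.Conv2d(8, 8, 3, padding=1),
    nn.Conv2d(8, 8, 3, padding=1),
)
x = torch.randn(1, 8, 16, 16)
y = seq(x)  # same as seq[1](seq[0](x))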