Using the same encoder & inception module with 2 different tensor inputs

Morning,

I think I am getting myself into a pickle. I have an input tensor [-1,1,x] which I then split into 2 tensors of size [-1,1,x/2]. I would like to then pass these 2 tensors through the same CNN and then concatenate them back together before going into the FC layers. I have used torch.split, which gives me a tuple that is then allocated to 2 input tensors (data_a and data_b).

Do I need to pass these using model.encoder(data_a) and model.encoder(data_b)?

If I do need to do that, do I then need to copy the statements in forward so that I end up with a set of entries for data_a and another for data_b? This starts to get very messy very quickly; is there a neater way? For example, is it possible to use tuples all the way through the forward function?

    def forward(self, data_a, data_b):
        data_a = self.conv1(data_a)
        data_a = self.bn1a(data_a)
        data_a = self.DP1(data_a)

        data_a = self.in1(data_a)  # inception module for strand data_a

        # repeat the above for data_b (same layers, so the weights are shared)
        data_b = self.conv1(data_b)
        data_b = self.bn1a(data_b)
        data_b = self.DP1(data_b)

        data_b = self.in1(data_b)  # inception module for strand data_b

        # concatenate data streams A & B
        data = torch.cat((data_a, data_b), 1)

        data = self.conv1e(data)
        data = self.bn1e(data)
        data = self.DP1e(data)

        data = self.HT(self.fc1(data))
        data = self.HT(self.fc1a(data))
        data = self.HT(self.fc2(data))
        data = self.fc3(data)
        # This is where I get confused: I need to return z_loc & z_scale from
        # model.encoder, but model.encoder can only take a single input
        # (data_a or data_b), and having 2 encoders would give 2 sets of
        # z_loc & z_scale. Can I sum the 2 and get the same answer?
        z_loc = self.fc31(data)
        z_scale = self.fc32(data)

        return z_loc, z_scale
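One way to avoid duplicating the branch code is to factor it into a helper method that forward calls once per strand. A minimal sketch of that structure (the layer choices and sizes here are made up for illustration; only the shape of the code mirrors the snippet above, and since both strands call the same layers the weights are shared):

```python
import torch
import torch.nn as nn

class TwoStrandEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # hypothetical stand-ins for conv1/bn1a/DP1 from the snippet above
        self.conv1 = nn.Conv1d(1, 8, kernel_size=3, padding=1)
        self.bn1a = nn.BatchNorm1d(8)
        self.DP1 = nn.Dropout(0.2)
        # hypothetical z heads; 16 channels * 512 length after the cat
        self.fc31 = nn.Linear(16 * 512, 32)  # z_loc head
        self.fc32 = nn.Linear(16 * 512, 32)  # z_scale head

    def _branch(self, x):
        # shared pipeline: both strands pass through the same layers
        return self.DP1(self.bn1a(self.conv1(x)))

    def forward(self, data_a, data_b):
        # run each strand through the shared branch, then concatenate
        data = torch.cat((self._branch(data_a), self._branch(data_b)), dim=1)
        data = data.flatten(1)
        return self.fc31(data), self.fc32(data)
```

With this layout there is still a single encoder returning one z_loc and one z_scale, computed from the concatenated features of both strands, so nothing needs to be summed afterwards.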

I have tried using:

    data_s = torch.split(data, 512, dim=2)
    for split in data_s:
        data = data.cuda()
        z_loc, z_scale = model.Encoder(data)
        z = model.reparam(z_loc, z_scale)
        out = model.Decoder(z)
        loss = loss_fn(out, data, z_loc, z_scale)
        optimizer.zero_grad()
        loss.backward(retain_graph=True)
        optimizer.step()

But I am not seeing anything which suggests that it's passing each of the data blocks through as separate fields. How can I check this?
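For reference, the chunks that torch.split produces can be sanity-checked by printing their count and shapes. A minimal sketch with a dummy tensor (x is assumed here to be 1024, so each half is 512):

```python
import torch

# dummy batch: [batch, channels, length] with length 1024
data = torch.randn(4, 1, 1024)

# split along the last dim into chunks of size 512
data_s = torch.split(data, 512, dim=2)

print(len(data_s))        # number of chunks
for split in data_s:
    print(split.shape)    # shape of each chunk
```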

It looks like you are not using split, but data, as the input to your model.
The splitting and loop should generally work if you pass split to your model and concatenate the outputs afterwards.
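Something like this, i.e. feeding split rather than data inside the loop. A minimal runnable sketch (the tiny Encoder/reparam/Decoder stand-in and the MSE loss are made up just to show the loop structure; the .cuda() calls are dropped so it runs on CPU):

```python
import torch
import torch.nn as nn

class ToyVAE(nn.Module):
    """Hypothetical stand-in exposing Encoder/reparam/Decoder like the snippet."""
    def __init__(self, feat=512, z_dim=16):
        super().__init__()
        self.enc = nn.Linear(feat, 2 * z_dim)
        self.dec = nn.Linear(z_dim, feat)

    def Encoder(self, x):
        z_loc, log_scale = self.enc(x).chunk(2, dim=-1)
        return z_loc, torch.exp(log_scale)

    def reparam(self, z_loc, z_scale):
        return z_loc + z_scale * torch.randn_like(z_scale)

    def Decoder(self, z):
        return self.dec(z)

model = ToyVAE()
optimizer = torch.optim.Adam(model.parameters())
data = torch.randn(4, 1, 1024)

data_s = torch.split(data, 512, dim=2)
for split in data_s:
    z_loc, z_scale = model.Encoder(split)      # use split, not data
    z = model.reparam(z_loc, z_scale)
    out = model.Decoder(z)
    loss = ((out - split) ** 2).mean()         # stand-in reconstruction loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```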

Hi ptrblck,

apologies for the delay, and thanks for spotting the deliberate mistake :slightly_smiling_face:

cheers,

chaslie