Correct way to define two encoder modules

Hi,

I am new to PyTorch.
When I am defining an encoder and decoder network, why should I create two different classes for the encoder and the decoder?

Also I have two auto-encoders.
Is it correct if I define them like this?

class encoder():
    def __init__():
        self.linear_encoder1 = nn.Linear()
        ...
        self.linear_encoder2 = nn.Linear()

    def forward(self, x):
        ...
        return x

How is the above different from:

class encoder1():
    def __init__():
        self.linear_encoder1 = nn.Linear()

    def forward(self, x):
        ...
        return x

class encoder2():
    def __init__():
        self.linear_encoder2 = nn.Linear()
        ...

    def forward(self, x):
        ...
        return x


How does back-prop work for such a case?

I am not sure if I have understood you correctly.

On the one hand, you can create a single class that defines both the encoder and the decoder architecture; on the other, you can create separate classes. To back-prop through everything, just add both parameter lists to the list of parameters the optimizer updates. For instance:

import torch
import torch.nn as nn

class autoencoder(nn.Module):
    def __init__(self):
        super(autoencoder, self).__init__()
        self.enc1 = nn.Linear(784, 512)
        self.enc2 = nn.Linear(512, 256)
        self.dec1 = nn.Linear(256, 512)
        self.dec2 = nn.Linear(512, 784)
        # wrap the layers in a Sequential just for clarity
        self.f = nn.Sequential(self.enc1, self.enc2, self.dec1, self.dec2)

    def forward(self, x):
        return self.f(x)

model = autoencoder()
parameter_list = model.parameters()
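To see how back-prop actually runs through such a model, the parameter iterator above is handed to an optimizer; here is a minimal sketch (the reconstruction loss, learning rate, and dummy batch are my assumptions, not part of the question):

```python
import torch
import torch.nn as nn

class autoencoder(nn.Module):
    def __init__(self):
        super(autoencoder, self).__init__()
        # encoder layers followed by decoder layers, sizes as in the example above
        self.f = nn.Sequential(
            nn.Linear(784, 512), nn.Linear(512, 256),   # encoder
            nn.Linear(256, 512), nn.Linear(512, 784),   # decoder
        )

    def forward(self, x):
        return self.f(x)

model = autoencoder()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

x = torch.randn(8, 784)                  # dummy input batch
loss = nn.MSELoss()(model(x), x)         # reconstruction loss
optimizer.zero_grad()
loss.backward()                          # gradients flow through every registered layer
optimizer.step()
```

Because every layer is a registered submodule, `model.parameters()` already yields all encoder and decoder weights, so one `backward()` call updates everything.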

Or use different classes for the encoder and decoder

import torch
import torch.nn as nn

class decoder(nn.Module):
    def __init__(self):
        super(decoder, self).__init__()
        self.dec1 = nn.Linear(256, 512)
        self.dec2 = nn.Linear(512, 784)
        # wrap the layers in a Sequential just for clarity
        self.f = nn.Sequential(self.dec1, self.dec2)

    def forward(self, x):
        return self.f(x)

class encoder(nn.Module):
    def __init__(self):
        super(encoder, self).__init__()
        self.enc1 = nn.Linear(784, 512)
        self.enc2 = nn.Linear(512, 256)
        # wrap the layers in a Sequential just for clarity
        self.f = nn.Sequential(self.enc1, self.enc2)

    def forward(self, x):
        return self.f(x)

enc = encoder()
dec = decoder()
parameter_list = [p for p in enc.parameters()]
parameter_list += [p for p in dec.parameters()]
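Once the list covers both modules, a single optimizer can drive back-prop through the whole encoder/decoder pair; a minimal, self-contained sketch (the batch size, learning rate, and use of plain `nn.Sequential` stand-ins are my assumptions):

```python
import torch
import torch.nn as nn

# stand-ins for the encoder and decoder classes above
enc = nn.Sequential(nn.Linear(784, 512), nn.Linear(512, 256))
dec = nn.Sequential(nn.Linear(256, 512), nn.Linear(512, 784))

# one parameter list covering both modules
parameter_list = list(enc.parameters()) + list(dec.parameters())
optimizer = torch.optim.Adam(parameter_list, lr=1e-3)

x = torch.randn(4, 784)
loss = nn.functional.mse_loss(dec(enc(x)), x)   # reconstruction loss
optimizer.zero_grad()
loss.backward()        # autograd reaches both enc and dec through the loss
optimizer.step()
```

Note that `list(module.parameters())` is equivalent to the comprehension and slightly more idiomatic.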

There is no strict need to create different classes for the encoder and decoder networks; it depends on what you want to do. You can always access your parameters through the named_parameters() method. By the way, in your snippets you are missing the inheritance from nn.Module, the self argument passed to the methods, and the call to super().

Hi,
Thank you for the reply.

What about when I want to define two different autoencoders?
In such a scenario, can I have just two classes: one for the encoders (like I mentioned before) and one for the decoders, where encoder1 and decoder1 would belong to autoencoder1, and so on? Am I clear now?

Create two instances. For example:

class decoder(nn.Module):
            #code
class encoder(nn.Module):
            #code

#first autoencoder
encoder1=encoder()
decoder1=decoder()

#second autoencoder
encoder2=encoder()
decoder2=decoder()
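Each call to the constructor creates a fresh set of weights, so the two autoencoders share nothing. A quick check (the single-layer encoder and its sizes are just an assumption for illustration):

```python
import torch.nn as nn

class encoder(nn.Module):
    def __init__(self):
        super(encoder, self).__init__()
        self.enc1 = nn.Linear(784, 256)

    def forward(self, x):
        return self.enc1(x)

# two separate instances -> two independent weight tensors
encoder1 = encoder()
encoder2 = encoder()

print(encoder1.enc1.weight is encoder2.enc1.weight)  # False: no sharing
```

Updating encoder1 during training therefore never touches encoder2's weights.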

Ahhh! So it is essential that I create separate instances, encoder1 and encoder2?
What I previously did was:

class decoder(nn.Module):
    #code

    def forward(self, inp1, inp2):
        #code
        return output1, output2

class encoder(nn.Module):
    #code

    def forward(self, inp1, inp2):
        #code
        return output1, output2

#for both first and second autoencoder
encoder = encoder()
decoder = decoder()

So how does backprop work for this?

No, it is not essential; maybe I expressed myself incorrectly. If you want to define two autoencoders, then you need two encoders and two decoders (each autoencoder has one encoder and one decoder). You can do that either by defining an autoencoder class or by using separate classes for the encoder and decoder (see my example above in "Correct way to define two encoder modules"). Each autoencoder is then a different instance. In your previous example:

#first autoencoder
encoder1 = encoder()
decoder1 = decoder()
parameters = [p for p in encoder1.parameters()] + [p for p in decoder1.parameters()]

#second autoencoder
encoder2 = encoder()
decoder2 = decoder()
parameters += [p for p in encoder2.parameters()]
parameters += [p for p in decoder2.parameters()]
#the combined list is given to an optimizer to perform stochastic optimization


Note that if the autoencoders do not share parameters and you want the same stochastic-optimization hyperparameters, you can use a single optimizer to optimize everything (as long as you sum up the cost for both of them).
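That shared-optimizer setup can be sketched as follows (single-layer encoders/decoders, learning rate, and dummy batch are my assumptions for brevity):

```python
import torch
import torch.nn as nn

# two independent autoencoders, each a separate encoder/decoder pair
enc1, dec1 = nn.Linear(784, 256), nn.Linear(256, 784)   # first autoencoder
enc2, dec2 = nn.Linear(784, 256), nn.Linear(256, 784)   # second autoencoder

# one parameter list spanning both autoencoders, driven by one optimizer
params = (list(enc1.parameters()) + list(dec1.parameters())
          + list(enc2.parameters()) + list(dec2.parameters()))
optimizer = torch.optim.SGD(params, lr=1e-2)

x = torch.randn(4, 784)
# sum the reconstruction costs so one backward() covers both models
loss = (nn.functional.mse_loss(dec1(enc1(x)), x)
        + nn.functional.mse_loss(dec2(enc2(x)), x))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the losses are summed, each autoencoder still receives only the gradient of its own term; the shared optimizer is just a convenience when the hyperparameters match.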

Ah okay, now I think I have understood you. Yes, it is mandatory to define different instances unless you want to share parameters for some reason.

Yes, that's exactly what I meant.

So if I have two different networks and don't want to share parameters, I have to create them separately. I will try this out. Thank you!

Exactly. It depends on what task you want to perform.