How can I use lists to make my model more "pythony" and pretty?

I want to create a number of linear layers determined by a parameter, so that a ternary classification is done for every output neuron.

Just like this small example I made:

import torch
import torch.nn as nn
import torch.nn.functional as F

class Classifier_OneHot(nn.Module):

    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()

        self.nC = output_size
        self.ic1 = nn.Linear(input_size, hidden_size)

        # one ternary classification head per output neuron
        self.fc1 = nn.Linear(hidden_size, 3)
        self.fc2 = nn.Linear(hidden_size, 3)
        self.fc3 = nn.Linear(hidden_size, 3)
        self.fc4 = nn.Linear(hidden_size, 3)

    def forward(self, x):
        hlo = F.relu(self.ic1(x))

        a1 = self.fc1(hlo)
        a2 = self.fc2(hlo)
        a3 = self.fc3(hlo)
        a4 = self.fc4(hlo)

        out = torch.stack((a1, a2, a3, a4), dim=-1)

        return out

However, this is not as dynamic and pretty as storing the tensors and layers in a list, like this:

class Classifier_OneHot_list(nn.Module):

    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()

        self.nC = output_size
        self.ic1 = nn.Linear(input_size, hidden_size)

        self.layerlist = [nn.Linear(hidden_size, 3) for i in range(self.nC)]

    def forward(self, x):
        hlo = F.relu(self.ic1(x))

        self.TensorList = [self.layerlist[i](hlo) for i in range(self.nC)]

        out = torch.stack(self.TensorList, dim=-1)

        return out

The Problem:

They aren't the same model and don't yield the same results, even though the logic and the output shape are the same. In fact, the cross-entropy loss is about ten times higher for the list model. It's a bad model.

Also, when you print the list model you get:

Classifier_OneHot_list(
  (ic1): Linear(in_features=2, out_features=10, bias=True)
)

Whereas for the original model you get kinda what I want:

Classifier_OneHot(
  (ic1): Linear(in_features=2, out_features=10, bias=True)
  (fc1): Linear(in_features=10, out_features=3, bias=True)
  (fc2): Linear(in_features=10, out_features=3, bias=True)
  (fc3): Linear(in_features=10, out_features=3, bias=True)
  (fc4): Linear(in_features=10, out_features=3, bias=True)
)

I really love PyTorch for its dynamic and "pythony" approach; I hope there is a way to implement the model as intended.

-Benjamin

You should use nn.ModuleList and add the linear layers to it instead of using a Python list, as the latter won’t properly register the modules (as seen in the print statement).
Besides that, your code should be working fine. :slight_smile:
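To illustrate the registration difference, here is a minimal sketch (class names and sizes are made up for the demo): parameters held in a plain Python list are invisible to the parent `nn.Module`, so the optimizer never sees or updates them, while `nn.ModuleList` registers every layer as a submodule.

```python
import torch.nn as nn

class PlainList(nn.Module):
    def __init__(self):
        super().__init__()
        # plain Python list: the Linear layers are NOT registered
        self.layers = [nn.Linear(10, 3) for _ in range(4)]

class Registered(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ModuleList: each Linear becomes a registered submodule
        self.layers = nn.ModuleList([nn.Linear(10, 3) for _ in range(4)])

print(len(list(PlainList().parameters())))   # 0 -> optimizer sees nothing
print(len(list(Registered().parameters())))  # 8 -> 4 weights + 4 biases
```

This is also why the plain-list model trains so badly: only `ic1` ever gets gradient updates.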

This worked perfectly! Thank you!

For anyone interested, this is the final code:

class Classifier_OneHot_list(nn.Module):

    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()

        self.nC = output_size
        self.ic1 = nn.Linear(input_size, hidden_size)

        # nn.ModuleList registers each Linear as a submodule,
        # so its parameters show up in model.parameters()
        self.linears = nn.ModuleList([nn.Linear(hidden_size, 3) for _ in range(self.nC)])

    def forward(self, x):
        hlo = F.relu(self.ic1(x))

        # a local list is enough; no need to store intermediates on self
        tensor_list = [layer(hlo) for layer in self.linears]

        out = torch.stack(tensor_list, dim=-1)

        return out
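As a quick sanity check, the same architecture can be rebuilt inline (hypothetical sizes: input 2, hidden 10, four heads, batch of 5) to confirm the output shape is `(batch, 3, output_size)`:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

ic1 = nn.Linear(2, 10)
heads = nn.ModuleList([nn.Linear(10, 3) for _ in range(4)])

x = torch.randn(5, 2)
h = F.relu(ic1(x))
# stack the four (5, 3) head outputs along a new last dimension
out = torch.stack([head(h) for head in heads], dim=-1)
print(out.shape)  # torch.Size([5, 3, 4])
```

This layout matches what `nn.CrossEntropyLoss` expects for the multi-head case: class scores on dim 1, with extra dimensions for the heads.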