[Solved] How to define the number of linear layers as a parameter in DNN class initialization

Hi there,

I have a simple question. I want to build a simple DNN, but have the number of linear layers passed in as a parameter, so that users can define a variable number of linear layers as they see fit. I have not figured out how to do this in PyTorch. For example, I can easily define a three-layer DNN like this:

class DNN(nn.Module):
    def __init__(self, nb_units, input_dim, output_dim):
        super(DNN, self).__init__()
        self.fc1 = nn.Linear(input_dim, nb_units)
        self.fc2 = nn.Linear(nb_units, nb_units)
        self.fc3 = nn.Linear(nb_units, output_dim)  

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.sigmoid(self.fc3(x))
        return x

Now I also want to be able to pass the number of layers as a parameter as well, I have tried this solution:

class DNN(nn.Module):
    def __init__(self, nb_layers, nb_units, input_dim, output_dim):
        super(DNN, self).__init__()
        self.nb_layers = nb_layers
        fc = []
        for i in range(nb_layers):
            if i == 0:
                fc.append(nn.Linear(input_dim, nb_units))
            elif i == nb_layers-1:
                fc.append(nn.Linear(nb_units, output_dim))
            else:
                fc.append(nn.Linear(nb_units, nb_units))
        self.fc = fc

    def forward(self, x):
        for i in range(self.nb_layers):
            if i == self.nb_layers-1:
                x = F.sigmoid(self.fc[i](x))
            else:
                x = F.relu(self.fc[i](x))
        return x

You can see that I essentially put the layer definitions in a list and use them one by one in the forward call. But with this approach, PyTorch gave me an error. Can anyone give me some help with this problem? How can I do what I want in PyTorch?

Thanks a lot!

Oh my.

So many indentation problems in your code.
You should have posted your error message here.

Your code is fine, I think you forgot to set self.nb_layers.

Also, I think it is better to use torch.nn.Sequential.
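For example, something like this sketch should work (assuming the same ReLU activations between hidden layers and a sigmoid output as in your code; the helper name make_dnn is just for illustration):

```python
import torch
from torch import nn

def make_dnn(nb_layers, nb_units, input_dim, output_dim):
    # Build the same variable-depth network with nn.Sequential;
    # modules added this way are registered automatically.
    layers = []
    for i in range(nb_layers):
        in_dim = input_dim if i == 0 else nb_units
        out_dim = output_dim if i == nb_layers - 1 else nb_units
        layers.append(nn.Linear(in_dim, out_dim))
        # sigmoid on the final layer, ReLU everywhere else
        layers.append(nn.Sigmoid() if i == nb_layers - 1 else nn.ReLU())
    return nn.Sequential(*layers)

model = make_dnn(3, 100, 500, 500)
x = torch.randn(10, 500)
y = model(x)  # shape (10, 500), values in [0, 1]
```

Because the activations are modules inside the Sequential, the forward method comes for free.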

The editor is really hard to use… I also got some indentation problems when pasting my code.

import torch
from torch import nn
from torch.autograd import Variable
from torch.nn import functional as F


class DNN(nn.Module):
    def __init__(self, nb_layers, nb_units, input_dim, output_dim):
        super(DNN, self).__init__()
        fc = []
        self.nb_layers = nb_layers
        for i in range(nb_layers):
            if i == 0:
                fc.append(nn.Linear(input_dim, nb_units))
            elif i == nb_layers-1:
                fc.append(nn.Linear(nb_units, output_dim))
            else:
                fc.append(nn.Linear(nb_units, nb_units))
        self.fc = fc

    def forward(self, x):
        for i in range(self.nb_layers):
            if i == self.nb_layers-1:
                x = F.sigmoid(self.fc[i](x))
            else:
                x = F.relu(self.fc[i](x))
        return x

a = DNN(3, 100, 500, 500)

input = Variable(torch.Tensor(10, 500))

output = a(input)

It should be
self.fc = nn.ModuleList(fc), or you can use an nn.ModuleList instead of [] from the start.
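To illustrate why (a minimal sketch): modules stored in a plain Python list are not registered as submodules, so model.parameters() misses their weights, while nn.ModuleList registers them:

```python
import torch
from torch import nn

class PlainList(nn.Module):
    def __init__(self):
        super(PlainList, self).__init__()
        self.fc = [nn.Linear(4, 4)]  # plain list: the layer is NOT registered

class WithModuleList(nn.Module):
    def __init__(self):
        super(WithModuleList, self).__init__()
        self.fc = nn.ModuleList([nn.Linear(4, 4)])  # registered as a submodule

print(len(list(PlainList().parameters())))       # 0: an optimizer would see nothing
print(len(list(WithModuleList().parameters())))  # 2: weight and bias
```

This is also why .cuda() and state_dict() silently skip layers kept in a plain list.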

Best regards

Thomas


Hi Zeng,

Thanks for your reply. I also find the code editor quite hard to use.

But the problem with my code is not that I did not define self.nb_layers. I defined that in my code; I just did not copy it into this post.

I achieved what I want with Thomas's solution.

Cheers

Hi Thomas,

I tried your answer and my code now works fine.

Thanks a lot

Shuokai