Why does the exec statement seem to be skipped in the forward function of nn.Module?

I was building a neural network model and trying to allow an arbitrary number of hidden layers. The sizes of the hidden layers are given in a list. For example, hidden_size = [50, 20] means that there are two hidden layers with sizes 50 and 20, respectively.

My code is pasted below.

import torch
import torch.nn as nn
from typing import List


class NeuralNet(nn.Module):
    """
    neural network class
    """
    def __init__(self, input_size: int, hidden_size: List[int], output_size: int):
        """
        initialization function
        :param input_size: input data dimension
        :param hidden_size: list of hidden layer sizes, arbitrary length
        :param output_size: output data dimension
        """
        super().__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.relu = nn.ReLU()
        self.softmax = nn.Softmax(dim=1)
        """layers"""
        self.input = nn.Linear(self.input_size, self.hidden_size[0])
        for h in range(len(hidden_size) - 1):
            exec(f"self.hidden{h} = nn.Linear(self.hidden_size[h], self.hidden_size[h+1])")
        self.output = nn.Linear(hidden_size[-1], output_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """
        forward propagation process, required by the nn.Module class
        :param x: the input data
        :return: the output from neural network
        """
        x = self.input(x)
        x = self.relu(x)
        #x = self.hidden0(x)
        #x = self.relu(x)
        for h in range(len(self.hidden_size) - 1):
            exec(f"x = self.hidden{h}(x)")
            x = self.relu(x)
        x = self.output(x)
        x = self.softmax(x)
        return x

I gave an input_size of 100, a hidden_size of [50, 20] and an output_size of 2. We should therefore get an input layer of size 100×50, a hidden0 layer of size 50×20 and an output layer of size 20×2. As expected, the layer named hidden0 is defined without any error in __init__ by the exec statement. However, it seems that the model skips the exec statement in the forward step. There are two reasons why I suspect this. The first is that the error reports a size mismatch in torch.addmm between two matrices of dimensions 200×50 and 20×2, respectively, where 200 is the batch size. The second is that if I use the commented-out lines instead of the for loop, everything works perfectly.

I’m wondering why this happens and what a possible solution would be that keeps the for loop and allows an arbitrary number of hidden layers.
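For what it is worth, a quick check outside of PyTorch (my own minimal sketch, not from the library) seems to confirm that assignments made inside exec() do not propagate back to the local variables of the enclosing function, which would explain why x is never updated in forward:

def demo():
    x = 1
    exec("x = 2")  # runs without error, but only rebinds x in exec's own namespace
    return x

print(demo())  # prints 1 -- the local x of demo() is unchanged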

I recommend using nn.ModuleList instead of exec.
Example:

# in __init__
self.hidden_list = nn.ModuleList()
for h in range(len(hidden_size) - 1):
    self.hidden_list.append(nn.Linear(self.hidden_size[h], self.hidden_size[h + 1]))

# in forward
for module in self.hidden_list:
    x = module(x)
    x = self.relu(x)
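
Putting it together, a full version of NeuralNet based on nn.ModuleList could look roughly like this (a sketch that keeps the names, ReLU and Softmax from your post; adjust as needed):

import torch
import torch.nn as nn
from typing import List


class NeuralNet(nn.Module):
    """Neural network with an arbitrary number of hidden layers."""

    def __init__(self, input_size: int, hidden_size: List[int], output_size: int):
        super().__init__()
        self.relu = nn.ReLU()
        self.softmax = nn.Softmax(dim=1)
        self.input = nn.Linear(input_size, hidden_size[0])
        # nn.ModuleList registers every layer, so their parameters are visible
        # to model.parameters(), .to(device), state_dict(), etc.
        self.hidden_list = nn.ModuleList(
            [nn.Linear(hidden_size[h], hidden_size[h + 1])
             for h in range(len(hidden_size) - 1)]
        )
        self.output = nn.Linear(hidden_size[-1], output_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.relu(self.input(x))
        for module in self.hidden_list:
            x = self.relu(module(x))
        return self.softmax(self.output(x))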


Collecting the layers in a plain Python list and then wrapping them with nn.Sequential(*module_list) is also a good choice.
Example:

# in __init__
module_list = list()
for h in range(len(hidden_size) - 1):
    module_list.append(nn.Linear(self.hidden_size[h], self.hidden_size[h + 1]))
    module_list.append(nn.ReLU())
self.hidden_list = nn.Sequential(*module_list)

# in forward
x = self.hidden_list(x)
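
Whichever variant you use, a quick smoke test with the sizes from the question (input 100, hidden [50, 20], output 2, batch 200) should produce a 200×2 output; this assumes a NeuralNet class like the sketch above:

model = NeuralNet(input_size=100, hidden_size=[50, 20], output_size=2)
x = torch.randn(200, 100)   # 200 samples with 100 features, as in the question
out = model(x)
print(out.shape)            # expected: torch.Size([200, 2])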


self.add_module will also work.
Example:

# in __init__
for h in range(len(hidden_size) - 1):
    self.add_module("hidden{}".format(h), nn.Linear(hidden_size[h], hidden_size[h + 1]))

# in forward
for h in range(len(self.hidden_size) - 1):
    x = eval("self.hidden{}(x)".format(h))
    x = self.relu(x)
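
As a small variation (my suggestion, not part of the add_module approach above), getattr avoids building and evaluating strings in forward, because add_module makes each layer reachable as a regular attribute:

# in forward
for h in range(len(self.hidden_size) - 1):
    layer = getattr(self, "hidden{}".format(h))  # look up the registered submodule by name
    x = self.relu(layer(x))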

These work perfectly for my case, thank you!