Using nn::ModuleList in the C++ API

Hi,

I am aware of the wonderful nn.ModuleList feature in the Python API, which lets you declare a simple feedforward network in a loop. I am now trying to port my code to C++ and am having trouble replicating it there, since I am not sure how ModuleList works in the C++ API. Any help would be great!

class Net(nn.Module):
    def __init__(self, numLayers, numNeuronsPerLyr):
        super().__init__()
        self.layers = nn.ModuleList()
        self.bnLayer = nn.ModuleList()  # one batch-norm per hidden layer
        current_dim = 2

        for lyr in range(numLayers):  # define the layers in a loop
            l = nn.Linear(current_dim, numNeuronsPerLyr)
            nn.init.xavier_normal_(l.weight)
            nn.init.zeros_(l.bias)
            self.layers.append(l)
            self.bnLayer.append(nn.BatchNorm1d(numNeuronsPerLyr))
            current_dim = numNeuronsPerLyr
        self.layers.append(nn.Linear(current_dim, 3))

    def forward(self, x):
        m = nn.LeakyReLU()
        for ctr, layer in enumerate(self.layers[:-1]):  # forward prop
            x = m(self.bnLayer[ctr](layer(x)))  # LeakyReLU in all hidden layers
        out = torch.softmax(self.layers[-1](x), dim=1)  # output-layer softmax
        return out

@Aaditya_Chandrasekha
There is a short usage example in the code comment here:
https://github.com/pytorch/pytorch/blob/cd0724f9f1b57dae12be2c3fc6be1bd41210ee88/torch/csrc/api/include/torch/nn/modules/container/modulelist.h#L11

The tests here contain more examples:
https://github.com/ShahriarSS/pytorch/blob/678873103191c329e2ca4a53db1d398599ad9443/test/cpp/api/modulelist.cpp
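To make this concrete, here is a sketch of the Python network above ported to the C++ API (batch-norm omitted for brevity; names mirror the Python code, and details like the `as<>` downcast follow the pattern from the linked ModuleList comment — exact idioms may vary across libtorch versions):

```cpp
#include <torch/torch.h>

struct NetImpl : torch::nn::Module {
  torch::nn::ModuleList layers;

  NetImpl(int64_t numLayers, int64_t numNeuronsPerLyr) {
    int64_t current_dim = 2;
    for (int64_t lyr = 0; lyr < numLayers; ++lyr) {  // define the layers in a loop
      auto l = torch::nn::Linear(current_dim, numNeuronsPerLyr);
      torch::nn::init::xavier_normal_(l->weight);
      torch::nn::init::zeros_(l->bias);
      layers->push_back(l);
      current_dim = numNeuronsPerLyr;
    }
    layers->push_back(torch::nn::Linear(current_dim, 3));
    register_module("layers", layers);  // also registers the contained layers
  }

  torch::Tensor forward(torch::Tensor x) {
    // all but the last layer: linear + LeakyReLU
    for (size_t i = 0; i + 1 < layers->size(); ++i) {
      x = torch::leaky_relu(layers[i]->as<torch::nn::Linear>()->forward(x));
    }
    // output layer with softmax
    x = layers[layers->size() - 1]->as<torch::nn::Linear>()->forward(x);
    return torch::softmax(x, /*dim=*/1);
  }
};
TORCH_MODULE(Net);
```

Unlike in Python, iterating a C++ ModuleList yields type-erased `Module` pointers, so each layer is downcast with `as<torch::nn::Linear>()` before calling `forward`.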


I’m trying to use a ModuleList inside a module to create a multi-layer perceptron where the user can specify the number and size of the hidden layers. The signature of the constructor is:

MLPImpl(int64_t inputdim, int64_t outputdim, std::vector<int64_t> hidden)

MLPImpl has the member variable

torch::nn::ModuleList layers;

I call register_module("layers", layers) in the constructor, and add some linear layers to the "layers" member via push_back. Looking at the implementation, push_back should also register those layers, so I don’t register them separately.

But somehow this doesn’t work when I use torch::save and torch::load to store the neural network: the linear submodules contained in "layers" won’t load.

Am I missing something here?

So it works when I initialize the module with the right number of elements in "layers" before loading, but that somewhat defeats the point of saving…
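For reference, a minimal sketch of the setup described above, including the round trip that only works when both sides construct the same architecture (hidden sizes and file name here are made up for illustration):

```cpp
#include <torch/torch.h>
#include <vector>

struct MLPImpl : torch::nn::Module {
  torch::nn::ModuleList layers;

  MLPImpl(int64_t inputdim, int64_t outputdim, std::vector<int64_t> hidden) {
    register_module("layers", layers);
    int64_t in = inputdim;
    for (int64_t h : hidden) {
      layers->push_back(torch::nn::Linear(in, h));  // push_back also registers
      in = h;
    }
    layers->push_back(torch::nn::Linear(in, outputdim));
  }
};
TORCH_MODULE(MLP);

int main() {
  MLP net(4, 2, std::vector<int64_t>{8, 8});
  torch::save(net, "mlp.pt");

  // torch::load restores parameters into an already-constructed module,
  // so the loading side must build the same layer sizes as at save time:
  MLP net2(4, 2, std::vector<int64_t>{8, 8});
  torch::load(net2, "mlp.pt");
}
```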