How to get each layer of a module and the number of layers in C++

Hi, All

I am trying to apply ReLU to the output of each layer in C++.

The python code is like:
for i in range(len(self.layers)):
    m = self.layers[i]
    x_f = F.relu(m(x))

where the layers are defined as
self.layers = torch.nn.ModuleList([
    LinearWeightNorm(input_dim, 1000),
    LinearWeightNorm(1000, 500),
    LinearWeightNorm(500, 250),
    LinearWeightNorm(250, 250),
    LinearWeightNorm(250, 250)])
self.final = LinearWeightNorm(250, output_dim, weight_scale=1)

However, in C++ code, if I wrote

for (int64_t i = 0; i < 5; i++) {
  torch::Tensor x_f = torch::nn::functional::relu(layers[i]->as<LinearWeightNorm>()->forward(x));
}

I hit an error at “layers[i]”; when I checked, layers->size() is 0.

Any comments or better solutions, please?

Based on the error, it seems you are not initializing the ModuleList properly.
Have a look at this test to see how ModuleList is used in C++.


When I change it to

auto a = layers[0+i]->as()->forward(x);

the error reports “index out of range”:
terminate called after throwing an instance of ‘c10::Error’
I defined the layers as

struct DiscriminatorImpl : torch::nn::Module {
  torch::Tensor x;

  torch::nn::ModuleList layers;
  LinearWeightNorm final = nullptr;

  int64_t input_dim = 28 * 28;
  Scalar output_dim = 10;
  Scalar weight_scale = 1;

  DiscriminatorImpl(int64_t input_dim, int64_t output_dim) {
    layers = register_module("layer", torch::nn::ModuleList(
        LinearWeightNorm(input_dim, 1000, true, 1, 0.1),
        LinearWeightNorm(1000, 500, true, 1, 0.1),
        LinearWeightNorm(500, 250, true, 1, 0.1),
        LinearWeightNorm(250, 250, true, 1, 0.1),
        LinearWeightNorm(250, 250, true, 1, 0.1)));
    final = register_module("final", LinearWeightNorm(250, output_dim, true, 1, 0.1));
  }
};

May I ask how I should initialize the ModuleList properly in this case, please?