Is there a way to directly loop through the submodules (all layers) of the network?

What I see is that the methods modules or named_modules wrap all the layers in shared_ptr<torch::nn::Module> and appear to store them at a different address. How can I access these modules directly without referring to them by attribute name?

This is in libtorch. What I want is something like named_modules or modules in Python, where iterating yields the submodules themselves.

In libtorch (the C++ frontend of PyTorch), you can access the modules in much the same way as in Python. For the immediate submodules, use children() or named_children(), which return the direct children of a module.

Here’s a small example:

#include <torch/torch.h>
#include <iostream>

auto net = torch::nn::Sequential(torch::nn::Linear(10, 50),
                                 torch::nn::ReLU(),
                                 torch::nn::Linear(50, 2));

for (const auto& module : net->named_children()) {
    // module.value() is a std::shared_ptr<torch::nn::Module>; dereference it
    // to print the module description rather than the pointer value.
    std::cout << module.key() << ": " << *module.value() << std::endl;
}

In this example, named_children() returns the immediate child modules keyed by name, so you get both the name and the module itself. You can use children() instead if you don’t care about the names and just want the modules. Note that both hand the submodules back as std::shared_ptr<torch::nn::Module>.

To access a specific module of a Sequential directly, you can index it by position, for example:

auto linear_layer = net->ptr<torch::nn::LinearImpl>(0);

This gives you a std::shared_ptr<torch::nn::LinearImpl> to the first Linear module in your network (Sequential's templated ptr<T>(index) casts the child at that index to the requested type).
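With a typed pointer, the layer-specific members become accessible. A minimal sketch, assuming the net and linear_layer defined above:

// linear_layer is std::shared_ptr<torch::nn::LinearImpl>, so Linear-specific
// members such as weight and bias are visible.
std::cout << linear_layer->weight.sizes() << std::endl;  // [50, 10]
std::cout << linear_layer->bias.sizes() << std::endl;    // [50]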

If you need to access all the modules (including nested ones), torch::nn::Module also provides modules() and named_modules() in libtorch, which traverse the whole submodule tree. As you noted, these likewise return std::shared_ptr<torch::nn::Module>, so to reach layer-specific members (weights, options, etc.) you have to downcast, for example with ->as<torch::nn::Linear>().
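For example, a minimal sketch assuming the Sequential net from above (as<>() returns a null pointer when the type does not match, so non-Linear children are simply skipped):

// Walk the whole module tree and pick out the Linear layers.
for (const auto& pair : net->named_modules("", /*include_self=*/false)) {
    if (auto* linear = pair.value()->as<torch::nn::Linear>()) {
        std::cout << pair.key() << ": weight "
                  << linear->weight.sizes() << std::endl;
    }
}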

Thank you so much for your reply. I just checked the children() method. It still returns the type std::shared_ptr<torch::nn::Module>. It seems like the original modules (torch::nn::Linear and the like) are upcast to the base struct torch::nn::Module.

struct Net : torch::nn::Module {
  Net()
      : fc0(torch::nn::Conv2dOptions(1, 1, 2)),
        fc1(torch::nn::LinearOptions(2, 3).bias(true)),
        fc2(torch::nn::LinearOptions(3, 32).bias(true)),
        fc3(torch::nn::LinearOptions(32, 10).bias(true)) {
    // Construct and register the submodules.
    register_module("fc1", fc1);
    register_module("fc2", fc2);
    register_module("fc3", fc3);
    register_module("fc0", fc0);
  }

  torch::Tensor forward(torch::Tensor x) {
    // Use one of many tensor manipulation functions.
    x = torch::relu(fc1->forward(x.reshape({x.size(0), 784})));
    x = torch::dropout(x, /*p=*/0.5, /*train=*/is_training());
    x = torch::relu(fc2->forward(x));
    x = torch::log_softmax(fc3->forward(x), /*dim=*/1);
    return x;
  }

  torch::nn::Conv2d fc0;
  torch::nn::Linear fc1, fc2, fc3;
};

// e.g. in main():
Net test_net;
for (auto& module : test_net.children()) {
  std::cout << module << std::endl;   // prints the shared_ptr's stored pointer
}
std::cout << &test_net.fc1 << std::endl;  // address of the member holder itself

The addresses printed by the for loop are 0x2d6abf0, 0x2d6eca0, 0x2d6f680.

Whereas std::cout << &test_net.fc1 << std::endl prints 0x7ffe2d70b668, so I believe the address has changed. Also, torch::nn::Module does not expose the weight attribute and the like; I think the shared_ptr has to be converted with shared_ptr->as<torch::nn::Linear>(), and then it is a new address again. I was wondering how to directly access fc1, fc2, fc3, etc. in a for loop without referring to test_net.fc1 explicitly?
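Concretely, what I tried looks roughly like this (a sketch based on the Net above; as<>() is what I meant by converting the shared_ptr):

for (const auto& child : test_net.children()) {
  // child is std::shared_ptr<torch::nn::Module>; as<>() downcasts it to the
  // concrete layer type and returns nullptr for non-matching children.
  if (auto* linear = child->as<torch::nn::Linear>()) {
    std::cout << linear->weight.sizes() << std::endl;
  }
}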