Conversion from PyTorch (Python) to LibTorch (C++) hits the error "no viable overloaded '='"

Hi all, I have a question about converting PyTorch code to LibTorch (C++).

The Python code is:

self.layers = torch.nn.ModuleList([
    LinearWeightNorm(input_dim, 1000),
    LinearWeightNorm(1000, 500),
    LinearWeightNorm(500, 250),
    LinearWeightNorm(250, 250),
    LinearWeightNorm(250, 250)
])
for i in range(len(self.layers)):
    m = self.layers[i]
    x_f = F.relu(m(x))

In LibTorch, how should I convert the line "m = self.layers[i]"?

I use:

layers = register_module("layer", torch::nn::ModuleList(
    LinearWeightNorm(input_dim, 1000),
    LinearWeightNorm(1000, 500),
    LinearWeightNorm(500, 250),
    LinearWeightNorm(250, 250),
    LinearWeightNorm(250, 250)
));

for (int64_t i = 0; i < sizeof(layers); i++) {
    torch::nn::Module m;
    m = layers[i];
    torch::Tensor x_f = torch::nn::functional::relu(m(x));
}

I get an error on the line m = layers[i] saying "No viable overloaded '='".

Could anyone suggest how to modify this line to make it work?

Any comments are welcome. Much appreciated.

Thanks

Just look at the documentation:
https://pytorch.org/cppdocs/api/classtorch_1_1nn_1_1_module_list.html

Could you please share a bit more about how to modify the line to remove the error? I am new to LibTorch. Thanks.

The structure of the example in the documentation is the same as yours; you only need to change the variable names. Why should I spend my energy rewriting the same example?

Hi, if I follow the example and use

for (const auto& module : *layers) {
    torch::nn::Module m;
    m = layers[i];

then there is no index i to iterate over, whereas my code needs to fetch the i-th layer from the ModuleList layers.
How can I achieve that? I feel it is not exactly the same as the documentation example (although I agree they are similar).
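
For reference, here is a minimal sketch of one way to write the indexed loop, assuming LinearWeightNorm follows the usual LibTorch pattern of a LinearWeightNormImpl class wrapped with TORCH_MODULE(LinearWeightNorm) (that definition is not shown in the post, so the Impl name here is an assumption). torch::nn::ModuleList stores its children as type-erased std::shared_ptr<Module>, which is why assigning an element to a torch::nn::Module value fails with "no viable overloaded '='"; instead, fetch the i-th entry and cast it back to its concrete type before calling forward():

// Sketch only: assumes LinearWeightNormImpl + TORCH_MODULE(LinearWeightNorm),
// and that `layers` is the registered torch::nn::ModuleList from above.
torch::Tensor x_f;
for (size_t i = 0; i < layers->size(); ++i) {
    // ptr(i) returns std::shared_ptr<Module>; as<>() recovers the concrete
    // implementation (it returns nullptr if the type does not match).
    auto* m = layers->ptr(i)->as<LinearWeightNormImpl>();
    x_f = torch::relu(m->forward(x));
}

An alternative is layers->at<LinearWeightNormImpl>(i), which returns a reference to the i-th module directly, or the range-for loop from the documentation combined with module->as<LinearWeightNormImpl>() if you do not actually need the index.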